OpenAI announced plans to implement parental controls and enhanced safety measures for ChatGPT after parents filed a lawsuit this week in California court claiming the popular AI chatbot contributed to their 16-year-old son's suicide earlier this year.
The company said it feels "a deep responsibility to help those who need it most" and is working to better respond to situations involving ChatGPT users who may be experiencing mental health crises and suicidal ideation.
"We will soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT," OpenAI said in a blog post. "We're also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in."
OpenAI did not immediately respond to a request for comment. (Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)
One of the safety features OpenAI is testing would let users designate an emergency contact who can be reached with one-click messages or calls within ChatGPT. Another is an opt-in feature that would allow the chatbot to reach out to those contacts directly. OpenAI has not given a specific timeline for the changes.
The lawsuit, filed by the parents of 16-year-old Adam Raine, claims that ChatGPT provided their son with information about suicide methods, validated his suicidal thoughts and offered to help write a suicide note five days before his death in April. The complaint names OpenAI and CEO Sam Altman as defendants and seeks unspecified damages.
"This tragedy was not a glitch or an unforeseen edge case: it was the predictable result of deliberate design choices," the complaint states. "OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency."
The case is one of the first major legal challenges to AI companies over content moderation and user safety, and it could set a precedent for how large language models such as ChatGPT, Gemini and Claude handle sensitive interactions with people at risk. These tools have faced criticism over how they interact with vulnerable users, especially young people. The American Psychological Association has warned parents to monitor their children's use of AI chatbots and characters.
If you feel that you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room for immediate help. Explain that it is a psychiatric emergency and ask for someone who is trained for these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.