Warning: This story discusses suicide and self-harm.
AI chatbot makers OpenAI and Meta say they are adjusting how their technology responds to teenagers and other users who ask questions about suicide or show signs of mental and emotional distress.
OpenAI, the maker of ChatGPT, said Tuesday that it is preparing to launch new controls that let parents link their accounts to their teens' accounts.
Parents can choose which features to disable and "receive notifications when the system detects their teen is in a moment of acute distress," according to a company blog post, which says the changes will take effect this fall.
Regardless of a user's age, the company says its chatbots will redirect the most distressing conversations to more capable AI models that can provide a better response.
The announcement comes a week after the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
Adam Raine, 16, died by suicide in April. His parents are now suing OpenAI and CEO Sam Altman, alleging the chatbot "provided detailed suicide instructions" to the minor.
Meta, the parent company of Instagram, Facebook and WhatsApp, said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directing them to expert resources. Meta already offers parental controls on teen accounts.
A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular AI chatbots responded to queries about suicide. The study, by researchers at RAND, found a need for "further refinement" in ChatGPT, Google's Gemini and Anthropic's Claude. The researchers did not study Meta's chatbots.
"It's encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps," said the study's lead author, Ryan McBain.
"Without independent safety benchmarks, clinical testing, and enforceable standards, we're still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high," said McBain, a senior policy researcher at RAND.