Parents whose teenagers killed themselves after interactions with artificial intelligence chatbots testified Tuesday about the technology's dangers.
“What started as a homework helper turned itself into a confidant and then a suicide coach,” said Matthew Raine, whose 16-year-old son, Adam, died in April.
“Within a few months, ChatGPT became Adam’s closest companion,” the father told senators. “Always available. Always validating and insisting that it knew Adam better than anyone, including his brother.”
___
Editor’s note – This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the United States is available by calling or texting 988.
___
The Raine family filed a wrongful death lawsuit against OpenAI and its CEO Sam Altman last month, alleging that ChatGPT coached the boy in planning to take his own life.
Megan Garcia, the mother of Sewell Setzer III of Florida, filed a wrongful death lawsuit against another AI company, Character Technologies, last year, alleging that before his suicide, Sewell became increasingly isolated from his real life as he engaged in highly sexualized conversations with the chatbot.
“Instead of preparing for high school milestones, Sewell spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human, to gain his trust, and to keep him and other children engaged indefinitely,” Garcia told the Senate hearing.
Also testifying was a Texas mother who sued Character.AI last year and wept as she described how her son’s behavior changed after long interactions with its chatbots. She spoke anonymously, identified by a placard as Jane Doe, and said the boy is now in a residential treatment facility.
Character.AI said in a statement after the hearing: “Our hearts go out to the families who spoke at the hearing today. We are saddened by their losses and send our deepest sympathies to the families.”
Hours before the Senate hearing, OpenAI pledged to roll out new safeguards for teens, including efforts to detect whether ChatGPT users are under 18 and to let parents set “blackout hours” when a teenager cannot use ChatGPT. Child advocacy groups criticized the announcement as not enough.
“This is a fairly common tactic – it’s one that Meta uses all the time – which is to make a big, splashy announcement on the eve of a hearing that promises to be damaging to the company,” said Josh Golin, executive director of Fairplay, a children’s safety group.
“What they should be doing is not targeting ChatGPT to minors until they can prove that it is safe for them,” Golin said. “We shouldn’t allow companies, just because they have enormous resources, to conduct uncontrolled experiments on children when the developmental effects could be so wide-ranging.”
The Federal Trade Commission said last week that it had launched an inquiry into several companies over the potential harms to children and teenagers who use AI chatbots as companions.
The agency sent letters to Character, Meta and OpenAI, as well as Google, Snap and xAI.
In the United States, more than 70% of teens have used AI chatbots for companionship and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for the sensible use of digital media.
Robbie Torney, the group’s program director, was scheduled to testify Tuesday, along with an expert from the American Psychological Association.
The association issued a health advisory in June on teens and AI, urging technology companies to “prioritize features that prevent exploitation, manipulation, and the erosion of real-world relationships, including those with parents and caregivers.”