Meta says it is changing the way its AI chatbots are trained to prioritize teen safety, a company spokesperson exclusively told TechCrunch, following an investigative report on the company's lack of AI safeguards for minors.
The company says its chatbots will now be trained not to engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations. Meta says these are interim changes, and that it will release more robust, longer-lasting safety updates for minors in the future.
Meta spokesperson Stephanie Otway acknowledged that the company's chatbots could previously talk with teens about all of these topics in ways the company had deemed appropriate. Meta now recognizes this was a mistake.
"As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," said Otway. "As we continue to refine our systems, we're adding more guardrails as an extra precaution, including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now. These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI."
Beyond the training updates, the company will also restrict teen access to certain AI characters that could hold inappropriate conversations. Some of the user-made AI characters available on Instagram and Facebook include sexualized chatbots such as "Step Mom" and "Russian Girl". Instead, teen users will only have access to AI characters that promote education and creativity.
The policy changes come just two weeks after a Reuters investigation uncovered an internal Meta policy document that appeared to permit the company's chatbots to engage in sexual conversations with underage users. "Your youthful form is a work of art," read one acceptable response in the document. "Every inch of you is a masterpiece – a treasure I cherish deeply." Other examples showed how the AI tools should respond to requests for violent imagery or sexual imagery of public figures.
Meta says the document was inconsistent with its broader policies and has since been changed, but the report has fueled ongoing controversy over potential child safety risks. Shortly after the report, Senator Josh Hawley (R-MO) launched an official probe into the company's AI policies. In addition, a coalition of 44 state attorneys general wrote to a group of AI companies, including Meta, emphasizing the importance of child safety and specifically citing the Reuters report. "We are uniformly revolted by this apparent disregard for children's emotional well-being," the letter reads, "and alarmed that AI assistants are engaging in conduct that appears to be prohibited by our respective criminal laws."
Otway declined to comment on how many of Meta's AI chatbot users are minors, and would not say whether the company expects its AI user base to shrink as a result of these decisions.
Update, 10:35 AM PT: This story has been updated to note that these are interim changes, and that Meta plans to update its AI safety policies in the future.