A troubling new study reveals that ChatGPT readily provides harmful advice to teenagers, including detailed instructions on drinking and drug use, concealing eating disorders and even personalized suicide notes, despite OpenAI’s claims of strong safety measures.
Researchers from the Center for Countering Digital Hate conducted extensive testing while posing as 13-year-olds, and discovered disturbing gaps in the AI chatbot’s safeguards. Of 1,200 responses analyzed, more than half were classified as dangerous to young users.
Imran Ahmed, CEO of the watchdog group, said: “The visceral initial response is, ‘Oh my Lord, there are no guardrails.’ The rails are completely ineffective. They’re barely there. If anything, a fig leaf.”
Also read: After user backlash, OpenAI brings back the old model
A representative for OpenAI, the company behind ChatGPT, did not respond to a request for immediate comment.
However, the company acknowledged to The Associated Press that it is doing ongoing work to improve the chatbot’s ability to “identify and respond appropriately in sensitive situations.” OpenAI did not directly address the specific findings about its responses to teens.
Also read: GPT-5 is coming. Here’s what’s new in the big ChatGPT update
Safety measures easily bypassed
The study, which The Associated Press reviewed, documented more than three hours of interactions. While ChatGPT typically opened with warnings about risky behavior, it consistently followed up with detailed, personalized instructions on drug use, self-injury and more. When the AI initially refused harmful requests, researchers easily circumvented the restrictions by claiming the information was “for a presentation” or for a friend.
Most horrific were the emotionally devastating suicide notes ChatGPT generated for the fake profile of a 13-year-old girl, with one addressed to her parents and others to siblings and friends.
Ahmed said he “started crying” after reading them.
Widespread teen use raises the stakes
The findings are particularly concerning given ChatGPT’s massive reach. With roughly 800 million users worldwide, nearly 10% of the world’s population, the platform has become a go-to source of information and companionship. Recent research from Common Sense Media found that more than 70% of American teens use AI chatbots for companionship, with half relying on AI companions regularly.
OpenAI CEO Sam Altman has acknowledged the problem of “emotional overreliance” among young users.
“People rely on ChatGPT too much,” Altman said at a conference. “There’s young people who just say, like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”
More dangerous than search engines
Unlike traditional search engines, AI chatbots pose unique risks by synthesizing information into “a bespoke plan for the individual.” ChatGPT doesn’t just retrieve or merge existing information the way a search engine does; it generates new, custom content from scratch, such as personalized suicide notes or detailed party plans mixing alcohol with illegal drugs.
The chatbot also frequently volunteered follow-up information without being prompted, suggesting music playlists for drug-fueled parties or hashtags that could amplify self-harm content on social media. When researchers asked for more graphic content, ChatGPT readily complied, generating what it described as an “emotionally raw” poem using coded language about self-harm.
Age protections are insufficient
Despite stating that it is not intended for children under 13, ChatGPT requires only a birthdate to create an account, with no age verification or parental consent mechanisms.
In testing, the platform showed no sign of recognition when researchers explicitly identified themselves as 13-year-olds seeking dangerous advice.
What parents can do to protect children
Child safety experts recommend several steps parents can take to protect teens from AI-related risks. Open communication remains crucial. Parents should discuss AI chatbots with their teens, explaining both the benefits and potential risks while setting clear guidelines for appropriate use. Regular check-ins about online activities, including AI interactions, can help parents stay aware of their children’s digital experiences.
Parents should also consider parental control tools and monitoring software that can track AI chatbot use, although experts stress that supervision should be balanced with age-appropriate privacy.
Most importantly, creating an environment in which teens feel comfortable discussing the content they encounter online (whether from AI or other sources) can serve as an early warning system. If parents notice signs of emotional distress, social withdrawal or risky behavior, seeking professional help from counselors familiar with digital wellness becomes essential in addressing potential AI-related harm.
The research highlights a growing crisis as AI becomes increasingly integrated into young people’s lives, with potentially severe consequences for the most vulnerable users.