OpenAI announced today that it is developing a "different ChatGPT experience" designed for teenagers, a move that underscores growing concern about the effect of AI chatbots on young people's mental health.
The new teen experience is part of a broader safety push by the company, which comes in the wake of a lawsuit by a family claiming that a lack of protections contributed to their teenager's death by suicide. The changes include age-prediction technology intended to keep users under 18 out of the standard version of ChatGPT. According to the announcement, if the system can't confidently estimate a person's age, ChatGPT will automatically default to the under-18 experience.
"We prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection," OpenAI CEO Sam Altman said in a blog post.
What's new in the teen experience
OpenAI says the teen version will come with tighter built-in limits, such as:
- Content filters: No flirtatious talk or discussions of self-harm, even in the context of fictional or creative writing.
- Crisis response: If a teen expresses suicidal ideation, OpenAI may attempt to alert parents and, in emergency situations, even contact authorities.
- Parental controls: Parents can link accounts, set rules for how ChatGPT responds and impose "blackout hours" when the app is off-limits.
(Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against ChatGPT maker OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)
The bigger picture
OpenAI's announcement came just hours before a Senate hearing in Washington, DC, examining the risks AI poses to young people. Lawmakers have been pressuring tech companies over teen safety following lawsuits accusing AI platforms of worsening mental health struggles or dispensing harmful health advice.
OpenAI's approach echoes earlier moves by companies such as Google, which built YouTube Kids after criticism and regulatory pressure. Altman's blog post frames the step as part of a broader balancing act between safety, privacy and freedom. He says adults should be treated "like adults" with fewer restrictions, while teens need extra protection, even if that means giving up some privacy, such as being asked to verify their identity.
Also read: OpenAI wants you to get certified in ChatGPT and find your next job
The company says it will roll out the teen-focused experience by the end of the year. History shows, however, that savvy teens often find workarounds to restricted access, so whether these guardrails are enough to protect determined teenagers remains an open question.
If you feel that you or someone you know is in immediate danger, call 911 (or your country's local emergency line) or go to an emergency room for immediate help. Explain that it's a psychiatric emergency and ask for someone trained in these kinds of situations. If you're struggling with negative thoughts or suicidal feelings, resources are available to help. In the US, call the National Suicide Prevention Lifeline at 988.