OpenAI wants you to prove that you are not a child

If you are full of childlike wonder, you may soon be moved to a version of ChatGPT more suitable for children. OpenAI announced Tuesday that it plans to implement a new age verification system that will help filter underage users into a new, more age-appropriate chatbot experience. The change comes as the company faces increasing scrutiny from lawmakers and regulators over how underage users interact with the chatbot.

To determine a user’s age, OpenAI will use an age prediction system that tries to estimate how old a user is based on how they interact with ChatGPT. The company said that when it believes a user is under 18, or when it cannot make a confident determination, it will filter them into the experience designed for younger users. Users who are placed in the under-18 experience but are actually over 18 will have to provide a form of identification to prove their age and regain access to the full version of ChatGPT.

According to the company, this version of the chatbot will block “graphic sexual content” and will not engage in flirtatious or sexual conversations. If a user under 18 expresses distress or suicidal ideation, it will attempt to contact the user’s parents, and it may contact the authorities if there is concern of “imminent harm.” OpenAI says it is prioritizing teens’ “safety ahead of privacy and freedom.”

OpenAI offered an example of how it will differentiate these experiences:

For example, the default behavior of our model will not lead to much flirtatious talk, but if an adult user asks for it, they should get it. For a more difficult example, the model by default should not provide instructions on how to commit suicide, but if an adult user asks for help writing a fictional story that depicts a suicide, the model should help with that request. “Treat our adult users like adults” is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom.

OpenAI is currently the subject of a wrongful death lawsuit filed after a 16-year-old boy took his own life, having expressed suicidal ideation to ChatGPT. Throughout the child’s conversations with the chatbot, he shared evidence of self-harm and expressed plans to attempt suicide, yet the platform did not shut down the conversation or escalate it in a way that could have led to intervention. Researchers have found that users can get chatbots like ChatGPT to offer advice on how to engage in self-harm or take their own lives. Earlier this month, the Federal Trade Commission demanded information from OpenAI and other tech companies about how their chatbots affect children and teenagers.

This move makes OpenAI the latest company to join the age verification trend that has swept the internet this year. The Supreme Court ruled that a Texas law requiring pornographic sites to verify the age of their users is constitutional, and the United Kingdom now requires online platforms to verify users’ ages. While some companies have forced users to upload a form of identification to prove their age, platforms like YouTube have also opted for age prediction methods like OpenAI’s, an approach that has been criticized as inaccurate and invasive.




