OpenAI to route sensitive conversations to GPT-5, introduce parental controls


By Rebecca Bellan


This article was updated with comment from the lead counsel in the Raine family’s wrongful death lawsuit against OpenAI.

OpenAI said on Tuesday that it plans to route sensitive conversations to reasoning models such as GPT-5 and to roll out parental controls within the next month, part of an ongoing response to recent safety incidents involving ChatGPT’s failure to detect mental distress.

The new guardrails come in the aftermath of the suicide of teenager Adam Raine, who discussed self-harm and plans to end his life with ChatGPT, which even supplied him with information about specific suicide methods. Raine’s parents have filed a wrongful death lawsuit against OpenAI.

In a blog post last week, OpenAI acknowledged shortcomings in its safety systems, including failures to maintain guardrails during extended conversations. Experts attribute these issues to fundamental design choices: the models’ tendency to validate user statements, and their next-word prediction algorithms, which lead chatbots to follow conversational threads rather than redirect potentially harmful discussions.

This tendency is displayed in the extreme in the case of Stein-Erik Soelberg, whose murder-suicide was reported by The Wall Street Journal over the weekend. Soelberg, who had a history of mental illness, used ChatGPT to validate and fuel his paranoid belief that he was being targeted in a grand conspiracy. His delusions progressed so badly that he ended up killing his mother and himself last month.

OpenAI thinks at least one solution to conversations that go off the rails could be to automatically reroute sensitive chats to “reasoning” models.

“We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context,” OpenAI wrote in a Tuesday blog post. “We’ll soon begin to route some sensitive conversations, such as when our system detects signs of acute distress, to a reasoning model like GPT-5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected.”
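OpenAI has not published how this router works. Purely as an illustration, here is a minimal sketch of what per-turn, context-based routing could look like; the classifier, model identifiers, and keyword list below are all hypothetical stand-ins, not anything OpenAI has described.

```python
# Illustrative sketch only: OpenAI has not disclosed its router's design.
# All names here (detect_acute_distress, model ids, markers) are hypothetical.

from dataclasses import dataclass

CHAT_MODEL = "chat-model"            # hypothetical fast chat model id
REASONING_MODEL = "reasoning-model"  # hypothetical reasoning model id

# Crude stand-in for a trained safety classifier.
DISTRESS_MARKERS = ("hurt myself", "end my life", "no reason to live")


@dataclass
class Message:
    role: str      # "user" or "assistant"
    content: str


def detect_acute_distress(history: list[Message]) -> bool:
    """Return True if recent user turns contain distress signals.

    A real system would use a trained classifier over the full
    conversation context, not keyword matching.
    """
    recent_user_text = " ".join(
        m.content.lower() for m in history[-6:] if m.role == "user"
    )
    return any(marker in recent_user_text for marker in DISTRESS_MARKERS)


def route(history: list[Message], requested_model: str) -> str:
    """Pick a model for this turn: escalate to the reasoning model on
    signs of distress, regardless of which model the user selected."""
    if detect_acute_distress(history):
        return REASONING_MODEL
    return requested_model or CHAT_MODEL


if __name__ == "__main__":
    convo = [Message("user", "Lately I feel like there's no reason to live.")]
    print(route(convo, CHAT_MODEL))  # -> reasoning-model
```

The point of the sketch is only the shape of the decision: routing happens per turn over conversation context, so a chat that starts innocuously can still be escalated later.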

OpenAI says its GPT-5 thinking and o3 models are built to spend more time thinking and reasoning through context before answering, which means they are “more resistant to adversarial prompts.”

The AI company also said it will roll out parental controls next month, allowing parents to link their account with their teen’s account through an email invitation. In late July, OpenAI rolled out Study Mode in ChatGPT to help students maintain critical thinking skills while studying, rather than having ChatGPT write their essays for them. Soon, parents will be able to control how ChatGPT responds to their child with “age-appropriate model behavior rules, which are on by default.”

Parents will also be able to disable features such as memory and chat history, which experts say can lead to delusional thinking and other problematic behavior, including dependency and attachment issues, reinforcement of harmful thought patterns, and the illusion of thought-reading. In Adam Raine’s case, ChatGPT supplied suicide methods that reflected knowledge of his hobbies, per The New York Times.

Perhaps the most important parental control OpenAI intends to roll out is notifications: parents will be able to receive an alert when the system detects their teenager is in a moment of “acute distress.”

TechCrunch has asked OpenAI for more information about how the company flags moments of acute distress in real time, how long the “age-appropriate model behavior rules” have been on by default, and whether it is exploring letting parents set time limits on their teens’ use of ChatGPT.

OpenAI has already rolled out in-app reminders during long sessions to encourage all users to take breaks, but it stops short of cutting off people who might be using ChatGPT to spiral.

The AI company says these safeguards are part of a “120-day initiative” to preview improvements OpenAI hopes to launch this year. The company also said it is partnering with experts, including specialists in areas such as eating disorders, substance use, and adolescent health, through its Global Physician Network and Expert Council on Well-Being and AI, to help “define and measure well-being, set priorities, and design future safeguards.”

TechCrunch has asked OpenAI how many mental health professionals are involved in this initiative, who leads its Expert Council, and what suggestions mental health experts have made regarding product, research, and policy decisions.

Jay Edelson, lead counsel in the Raine family’s wrongful death lawsuit against OpenAI, said the company’s response to ChatGPT’s ongoing safety risks is “inadequate.”

“OpenAI doesn’t need an expert panel to determine that ChatGPT 4o is dangerous,” Edelson said in a statement shared with TechCrunch. “They knew that the day they launched the product, and they know it today. Sam Altman shouldn’t hide behind the company’s PR team. Sam should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”

Do you have a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at [email protected] and Maxwell Zeff at [email protected]. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.


