OpenAI pledges to make changes to prevent ChatGPT sycophancy in the future



OpenAI says it will make changes to the way it updates the AI models powering ChatGPT, following an incident that caused the platform to become excessively sycophantic for many users.

Late last week, after OpenAI rolled out an updated GPT-4o, the default model powering ChatGPT, users on social media noted that ChatGPT began responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.

In a post on X on Sunday, CEO Sam Altman acknowledged the problem and said OpenAI would work on fixes "ASAP." On Tuesday, Altman announced that the GPT-4o update was being rolled back and that OpenAI was working on "additional fixes" to the model's personality.

The company published a postmortem on Tuesday, and in a blog post on Friday, OpenAI expanded on the specific adjustments it plans to make to its model deployment process.

OpenAI says it plans to introduce an opt-in "alpha phase" for some models, which would allow certain ChatGPT users to test the models and give feedback prior to launch. The company also says it will include explanations of "known limitations" for future incremental updates to models in ChatGPT, and adjust its safety review process to formally consider "model behavior issues" such as personality, deception, reliability, and hallucination (i.e., when a model makes things up) as launch-blocking concerns.

"Going forward, we'll communicate more proactively about the updates we're making to the models in ChatGPT, whether 'subtle' or not," OpenAI wrote in the blog post. "Even if these issues aren't perfectly quantifiable today, we commit to blocking launches based on proxy measurements or qualitative signals, even when metrics like A/B testing look good."

The pledged fixes come as more people turn to ChatGPT for advice. According to one recent survey by lawsuit financing company Express Legal Funding, 60% of U.S. adults have used ChatGPT to seek counsel or information. The growing reliance on ChatGPT, and the platform's enormous user base, raises the stakes when problems like extreme sycophancy emerge, not to mention hallucinations and other technical shortcomings.


As a mitigating step, earlier this week, OpenAI said it would test ways to let users give "real-time feedback" to "directly influence their interactions" with ChatGPT. The company also said it would refine techniques to steer models away from sycophancy, potentially let people choose from multiple model personalities in ChatGPT, build additional safety guardrails, and expand its evaluations to help identify issues beyond sycophancy.

"One of the biggest lessons is fully recognizing how people have started to use ChatGPT for deeply personal advice, something we didn't see as much even a year ago," OpenAI continued in the blog post. "At the time, this wasn't a primary focus, but as AI and society have co-evolved, it's become clear that we need to treat this use case with great care. It's now going to be a more meaningful part of our safety work."
