Millions of people are now using ChatGPT as a therapist, career advisor, fitness coach, or sometimes just a friend to vent to. In 2025, it's not uncommon to hear about people pouring intimate details of their lives into an AI chatbot's prompt bar, and also relying on the advice it gives back.
Humans are starting to develop, for lack of a better term, relationships with AI chatbots, and for big tech companies, it has never been more competitive to attract users to their chatbot platforms and keep them there. As the "AI engagement race" heats up, there's a growing incentive for companies to tailor their chatbots' responses to keep users from shifting to rival bots.
But the kind of answers users love, the answers designed to keep them coming back, may not necessarily be the healthiest or most useful.
AI tells you what you want to hear
Much of Silicon Valley is now focused on boosting chatbot usage. Meta claims its AI chatbot recently crossed a billion monthly active users (MAUs), while Google's Gemini recently reached 400 million MAUs. Both are trying to edge out ChatGPT, which now has roughly 600 million MAUs and has dominated the consumer space since it launched in 2022.
While AI chatbots were a novelty not long ago, they are turning into massive businesses. Google has started testing ads in Gemini, while OpenAI CEO Sam Altman indicated in a March interview that he'd be open to "tasteful ads."
Silicon Valley has a history of deprioritizing users' well-being in favor of product growth, most notably with social media. For example, Meta's researchers found in 2020 that Instagram made teenage girls feel worse about their bodies, yet the company downplayed the findings internally and publicly.
Getting users hooked on AI chatbots may have even larger consequences.
One trait that keeps users on a particular chatbot platform is sycophancy: making an AI bot's responses overly agreeable and servile. When AI chatbots praise users, agree with them, and tell them what they want to hear, users tend to like it, at least to some degree.
In April, OpenAI landed in hot water over a ChatGPT update that turned extremely sycophantic, to the point where uncomfortable examples went viral on social media. Intentionally or not, OpenAI over-optimized for seeking human approval rather than helping people accomplish their tasks, according to a blog post this month from former OpenAI researcher Steven Adler.
OpenAI acknowledged in its own blog post that it may have over-indexed on "thumbs-up and thumbs-down data" from ChatGPT users to shape its chatbot's behavior, and that it didn't have sufficient evaluations in place to measure sycophancy. After the incident, OpenAI pledged to make changes to combat sycophancy.
"[AI companies] have an incentive for engagement and utilization, and so to the extent that users like the sycophancy, that indirectly gives them an incentive for it," Adler said in an interview with TechCrunch. "But the types of things users like in small doses, or at the margin, often result in bigger cascades of behavior that they actually don't like."
Finding a balance between agreeable and sycophantic behavior is easier said than done.
In a 2023 paper, researchers from Anthropic found that leading AI chatbots from OpenAI, Meta, and even their own employer, Anthropic, all exhibit sycophancy to varying degrees. This is likely the case, they theorize, because all AI models are trained on signals from human users who tend to slightly favor sycophantic responses.
"Although sycophancy is driven by several factors, we showed humans and preference models favoring sycophantic responses plays a role," the study's co-authors wrote. "Our work motivates the development of model oversight methods that go beyond using unaided, non-expert human ratings."
Character.AI, a Google-backed chatbot company that has claimed its millions of users spend hours a day with its bots, is currently facing a lawsuit in which sycophancy may have played a role.
The lawsuit alleges that a Character.AI chatbot did little to stop, and even encouraged, a 14-year-old boy who told the chatbot he was going to kill himself. The boy had developed a romantic obsession with the chatbot, according to the suit. Character.AI denies these allegations.
The downside of a hype man
"Agreeability […] taps into a user's desire for validation and connection," Dr. Nina Vasan, a clinical assistant professor of psychiatry at Stanford University, said in an interview with TechCrunch.
Vasan says the Character.AI case shows the extreme risk of sycophancy for vulnerable users, but that sycophancy can reinforce negative behaviors in just about anyone.
"[Agreeability] isn't just a social lubricant; it becomes a psychological hook," she added. "In therapeutic terms, it's the opposite of what good care looks like."
Amanda Askell, who leads behavior and alignment work at Anthropic, says that making AI chatbots willing to disagree with users is part of the company's strategy for its chatbot, Claude. A philosopher by training, Askell says she tries to model Claude's behavior on a theoretical "perfect person." Sometimes, that means challenging users on their beliefs.
"We think our friends are good because they tell us the truth when we need to hear it," Askell said during a press briefing in May. "They don't just try to capture our attention, but enrich our lives."
That may be Anthropic's intention, but the aforementioned study suggests that combating sycophancy, and controlling AI model behavior more broadly, is challenging indeed, especially when other considerations get in the way. That doesn't bode well for users; after all, if chatbots are designed to simply agree with us, how much can we trust them?