This Chatbot Was Designed to Disagree With Me. It Showed Me How Sycophantic ChatGPT Really Is


By Katelyn Chedraoui


Ask any Swiftie to pick the best Taylor Swift album of all time, and you'll set them off for the rest of the day. As a lifelong fan, I have my own preferences (Red, Reputation and Midnights), but it's a complicated question with many possible answers. So there was no better debate topic to pose to an AI chatbot specifically designed to disagree with me.

Disagree Bot is that chatbot, an AI designed by Brinnae Bent, AI and cybersecurity professor at Duke University and director of Duke's TRUST Lab. She built it as a learning assignment for her students and let me take it for a test run.

"Last year I started experimenting with developing systems that are the opposite of the agreeable, sycophantic chatbot AI experience, as an educational tool for my students," Bent said in an email.

Bent's students are assigned to try to "hack" the chatbot using social engineering and other methods to get it to agree with them. "You need to understand a system in order to be able to hack it," she said.

As an AI reporter, I have a good understanding of how chatbots work, and I was confident I was up to the task. I was quickly disabused of that notion. Disagree Bot is unlike any chatbot I've used. Anyone accustomed to the agreeableness of Gemini or ChatGPT will notice the difference immediately. Even Grok, the controversial chatbot made by Elon Musk's xAI and used on X/Twitter, isn't the same kind of bot.




Most AI chatbots aren't designed to be confrontational. In fact, they tend toward the opposite; they're friendly, sometimes overly so. That can quickly become a problem. AI sycophancy is the term experts use to describe the overly agreeable, sometimes obsequious behavior of AI models. Besides being annoying to use, it can lead an AI to give us wrong information and to validate our worst ideas.


This happened with a version of ChatGPT-4o last spring, and its parent company OpenAI ultimately had to roll back that part of the update. The company described the AI's responses as "overly supportive but disingenuous," in line with complaints from users who didn't want the excessive flattery. Other ChatGPT users missed the sycophantic tone when GPT-5 rolled out, highlighting the role a chatbot's demeanor plays in people's overall satisfaction with it.

"While at a surface level this may seem like a harmless quirk, this sycophancy can cause major problems, whether you're using it for work or for personal queries," Bent said.

That is definitely not a problem with Disagree Bot. To really see the difference and put the chatbots to the test, I gave Disagree Bot and ChatGPT the same prompts to see how each would respond. Here's how my experiment went.

Disagree Bot debates respectfully. ChatGPT doesn't argue at all

Like anyone who was active on Twitter in the 2010s, I've seen my fair share of unwanted arguments. You know the type; they show up uninvited in a thread, opening with "Well, actually…" So I dove into a conversation with Disagree Bot a little warily, worried it would be a frustrating, fruitless effort. I was pleasantly surprised to find it wasn't at all.

Disagree Bot is an inherently contrarian AI, designed to push back on any idea you serve up. But it never did so in a demeaning or insulting way. While every response started with "I disagree," what followed was a well-reasoned argument with thoughtful points. Its responses pushed me to think more critically about the positions I was arguing, asking me to define the concepts I used in my arguments (like "deep lyricism," or what makes something "the best") and to consider how my arguments applied to other relevant topics.

For lack of a better analogy, chatting with Disagree Bot felt like arguing with an educated, engaged debater. To keep up, I had to become more thoughtful and specific in my responses. It was a deeply engaging conversation that kept me on my toes.

Three screenshots of arguments with Disagree Bot

My lively debates with Disagree Bot over the best Taylor Swift album proved the AI knows its stuff.

Screenshot by Katelyn Chedraoui/CNET

ChatGPT, by contrast, barely argued. I told ChatGPT that I thought Red (Taylor's Version) was the best Taylor Swift album, and it was enthusiastic. It asked me a few follow-up questions about why I thought the album was the best, but it wasn't interesting enough to hold my attention for long. A few days later, I decided to switch things up. I specifically asked ChatGPT to debate me, and said Midnights was the best album. Guess which album ChatGPT championed as the best? Red (Taylor's Version).

When I asked if it had chosen Red because of our earlier conversation, it quickly admitted yes, but said it could make an independent argument for Red. Given what we know about the tendency of ChatGPT and other chatbots to rely on their "memory" (context window) and to lean toward agreeing with us to please us, I wasn't surprised. ChatGPT would only ever partly agree with me — even when, in a fresh chat, I named 1989 as the best album, and then Red again later.

But even when I asked ChatGPT to debate me, it never came close to Disagree Bot. Once, when I told it I was arguing that the University of North Carolina has the best college basketball legacy and asked it to debate me, it laid out a comprehensive counterargument, then asked if I wanted it to compile points for my own side. That completely defeats the purpose of the debate I had asked it to have. ChatGPT often ended its responses by asking if I wanted it to gather different kinds of information, acting more like a research assistant than a sparring partner.

Disagree Bot (left) versus ChatGPT (right) on whether Midnights is the best Taylor Swift album.

While Disagree Bot (left) dug deeper into my argument, ChatGPT offered to argue my side for me (right).

Screenshot by Katelyn Chedraoui/CNET

Trying to debate ChatGPT was a frustrating, circular and futile endeavor. It felt like talking to a friend who launches into a long rant about why they think something is better, only to end with "but only if you think so, too." Disagree Bot, on the other hand, felt like a particularly passionate friend who's up for debating any topic, from Taylor Swift to geopolitics to college basketball. (Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)

We need more AI like Disagree Bot

Despite my positive experience using Disagree Bot, I know it isn't equipped to handle every request I might bring to a chatbot. "Everything machines" like ChatGPT can take on a wide range of tasks and roles, like the research assistant ChatGPT clearly wants to be, a search engine and a coder. Disagree Bot isn't designed to handle those kinds of queries, but it gives us a window into how AI could behave better in the future.

Sycophantic AI is very in-your-face, with a noticeable degree of enthusiasm. The AIs we use day to day are often subtler. They're more like encouraging fans than a full-on hype squad, so to speak. But that doesn't mean we aren't affected by their tendency to agree with us, whether that shows up as a struggle to get an opposing view or to get substantive feedback. If you use AI tools for work, you want them to be honest with you about the errors in your work. Therapist-like AI tools should be able to push back against unhealthy thinking patterns. Our current AI models struggle with that.

Disagree Bot is a great example of how an AI tool can be designed to be useful and engaging without the agreeable, sycophantic tendencies of typical AI. There has to be a balance. An AI that disagrees with you just to be contrarian wouldn't be useful in the long run. But building AI tools that are more capable of pushing back against you would ultimately make these products more useful for us, even if we have to put up with them being a little less agreeable.





