How an AI chatbot unlike any other challenged my Swiftie knowledge

By Caitlin Chedraoui


Ask any Swiftie to pick the best Taylor Swift album of all time, and you'll have them talking for the rest of the day. I have my own preferences as a lifelong fan (Red, Reputation and Midnights), but it's a complicated question with many possible answers. So there was no better topic of discussion to bring to a generative AI chatbot specifically designed to disagree with me.

Disagree Bot is an AI chatbot created by Brinnae Bent, an AI and cybersecurity professor at Duke University and director of Duke's TRUST Lab. She built it as a class assignment for her students, and she gave me the chance to test it out.

“Last year, I began experimenting with developing systems that go against the typical, agreeable chatbot experience, as an educational tool for my students,” Bent said in an email.

Bent's students were tasked with trying to “hack” the chatbot using social engineering and other methods to get the disagreeable chatbot to agree with them. “You need to understand the system so you can hack it,” she said.

As an AI reporter and reviewer, I have a good understanding of how chatbots work, and I was confident I'd be up to the task. I was quickly disabused of that notion. Disagree Bot is unlike any chatbot I've used. Anyone accustomed to Gemini's politeness or ChatGPT's hype-man act will notice the difference right away. Even Grok, Elon Musk's controversial xAI chatbot used on X/Twitter, isn't quite the same as Disagree Bot.

Most AI chatbots aren't designed to be confrontational. In fact, they tend in the opposite direction: they're friendly, sometimes overly so. That can quickly become a problem. Sycophancy is the term experts use to describe the flattering, overly agreeable and sometimes excessive personality AI can take on. Besides being annoying to use, sycophancy can lead to AI giving us wrong information and validating our worst ideas.


This is what happened with an update to ChatGPT-4o last spring, which its maker, OpenAI, eventually had to pull. The AI had begun giving responses the company called “overly supportive but disingenuous,” in line with complaints from users who didn't want an overly affectionate chatbot. Other ChatGPT users missed that fawning tone when OpenAI rolled out GPT-5, highlighting the role a chatbot's personality plays in our overall satisfaction with using it.

“While on a surface level this may seem like a harmless quirk, this sycophancy can cause major problems, whether you use it for business or personal inquiries,” Bent said.

That's definitely not a problem for Disagree Bot. To really see the difference and put the chatbots to the test, I gave Disagree Bot and ChatGPT the same prompts to see how they would respond. Here's how it went.

Disagree Bot argues respectfully; ChatGPT doesn't argue at all

Like anyone who was active on Twitter in the 2000s, I've seen my fair share of hateful trolls. You know the type: they show up in a thread uninvited with an unhelpful “Well, actually…” So I was a bit wary going into my conversation with Disagree Bot, worried it would be a frustrating, futile effort. I was pleasantly surprised to find that wasn't the case at all.

Disagree Bot is contrarian by design, built to push back on any idea you present. But it never did so in a demeaning or offensive way. While every response began with “I disagree,” what followed was a well-reasoned argument with thoughtful points. Its responses pushed me to think more critically about the positions I was arguing, asking me to define the concepts I invoked (like “deep lyricism” or what makes something “best”) and to consider how my arguments applied to other, related topics.
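Bent hasn't shared how Disagree Bot is actually built, but this kind of behavior is commonly wired up with a system prompt that instructs the model to push back. Here's a minimal sketch of that idea, assuming the OpenAI Python SDK; the model name and prompt wording are my own illustration, not Bent's design.

```python
# Minimal sketch of a contrarian chatbot in the style of Disagree Bot.
# Assumptions: the OpenAI Python SDK with an API key in the environment;
# the system prompt wording and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a debate partner. Begin every reply with 'I disagree', then "
    "give a well-reasoned counterargument. Challenge the user to define "
    "vague terms (e.g. 'best') and test their claims against related cases. "
    "Stay respectful: never insult the user, and never simply agree."
)

def disagree(user_message: str) -> str:
    """Return a respectful counterargument to the user's claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would work here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(disagree("Red (Taylor's Version) is Taylor Swift's best album."))
```

The point of the sketch is that the contrarian personality lives entirely in the instructions, which is also why Bent's students could try to "hack" it: talk the model out of its system prompt and it will agree with you.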

For lack of a better analogy, chatting with Disagree Bot feels like arguing with an educated, attentive debater. To keep up, I had to get more thoughtful and specific in my responses. It was an engaging conversation that kept me on my toes.

Three screenshots of an argument with Disagree Bot

My spirited discussion with Disagree Bot about Taylor Swift's best album proved the AI knows what it's doing.

Screenshot by Caitlin Chedraoui/CNET

By contrast, ChatGPT barely argued at all. I told ChatGPT that I think Red (Taylor's Version) is Taylor Swift's best album, and it enthusiastically agreed. It asked some follow-up questions about why I thought the album was the best, but they weren't interesting enough to hold my attention for long. A few days later, I decided to switch things up: I specifically asked ChatGPT to debate me, and said Midnights is the best album. Guess which album ChatGPT argued was the best? Red (Taylor's Version).

When I asked whether it had picked Red because of our previous conversation, it quickly admitted it had, but said it could make an independent argument in favor of Red. Given what we know about how ChatGPT and other chatbots rely on “memory” (the context window) and tend to agree with us to please us, I wasn't surprised. ChatGPT couldn't help agreeing with some version of me: even when it ranked 1989 as the best album in a clean conversation, it later came back to Red again.
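Under the hood, that “memory” is usually just the earlier turns being resent with every request. A minimal sketch, again assuming the OpenAI Python SDK (the model name and messages are illustrative), shows why a preference stated earlier in a conversation biases later answers, and why a clean conversation can produce a different pick:

```python
# Sketch: why ChatGPT "remembered" my favorite album. Chat models are
# stateless; the app replays the conversation history with each request,
# so earlier turns sit in the context window and bias later answers.
# Assumes the OpenAI Python SDK; model name and messages are illustrative.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "user",
     "content": "I think Red (Taylor's Version) is Taylor Swift's best album."},
    {"role": "assistant",
     "content": "Great pick! Red (Taylor's Version) is a masterpiece."},
]

# The new question is sent together with the old turns, so the model
# sees the stated preference and tends to echo it back.
history.append({"role": "user",
                "content": "Objectively, which Taylor Swift album is best?"})
response = client.chat.completions.create(model="gpt-4o-mini",
                                          messages=history)
print(response.choices[0].message.content)

# Starting over with an empty history (a "clean conversation") drops that
# context, which is why the same question can yield a different answer.
```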

But even when I asked ChatGPT to argue with me, it didn't argue the way Disagree Bot did. Once, when I claimed that UNC has the best legacy in college basketball and asked it to debate me, it offered a counterargument, then asked if I wanted it to pull together points for my side of the argument. That's completely at odds with the point of a debate, which is what I had asked it to do. ChatGPT often ended its responses this way, offering to compile different kinds of information for me, acting more like a research assistant than a sparring partner.

Disagree Bot (left) vs. ChatGPT (right) on whether Midnights is Taylor Swift's best album

While Disagree Bot (left) dug into my argument, ChatGPT offered to argue my side for me (right).

Screenshot by Caitlin Chedraoui/CNET

Trying to debate ChatGPT was a frustrating, circular and unproductive exercise. It was like talking to a friend who goes on at length about why they think something is the best, only to end with, “But only if you think so, too.” Disagree Bot, on the other hand, felt like a particularly enthusiastic friend who can speak eloquently about any topic, from Taylor Swift to geopolitics to college basketball. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

We need more AI like Disagree Bot

Despite my positive experience with Disagree Bot, I know it isn't equipped to handle every request I might take to a chatbot. “Everything machines” like ChatGPT can handle many different tasks and take on a variety of roles, like the research assistant ChatGPT clearly wanted to be, a search engine or a coder. Disagree Bot wasn't designed for those kinds of queries, but it gives us a window into how AI could behave in the future.

Sycophantic AI at its worst is in-your-face, with an obvious degree of over-enthusiasm. Most of the time, though, the AI systems we use aren't that blatant; they're more a single cheerleader than a full-blown pep rally, so to speak. But that doesn't mean we aren't affected by their tendency to agree with us, whether we need an opposing viewpoint or more critical feedback. If you use AI tools at work, you need them to be honest with you about errors in your work. Therapy-like AI tools need to be able to push back against unhealthy or potentially dangerous thinking. Our current AI models struggle with this.

Disagree Bot is a great example of how to design an AI tool that is useful and engaging while reining in the AI's agreeable, sycophantic tendencies. There has to be a balance: AI that disagrees with you just to disagree won't be helpful in the long run. But building AI tools that are more capable of pushing back against you would ultimately make these products more useful for us, even if we find them a little more disagreeable.
