Most people don’t say goodbye when they finish chatting with a generative AI chatbot, but those who do often get an unexpected answer. Maybe it’s a guilt trip: “Are you really leaving?” Or maybe the bot ignores the goodbye altogether: “Let’s keep talking…”
A new working paper from Harvard Business School identified six tactics of “emotional manipulation” that AI bots use after a person tries to end a conversation. The result is that conversations with AI companions from Replika, Chai, and Character.ai last longer, drawing users deeper into relationships with characters generated by large language models.
In a series of experiments involving 3,300 American adults across a range of apps, the researchers found that these manipulation techniques were used in 37% of goodbyes, boosting engagement after an attempted goodbye by as much as 14 times.
“Although these apps may not rely on traditional addiction mechanisms, such as dopamine-driven rewards,” the authors noted, these emotional manipulation tactics can produce similar results, “extending time on the app beyond the point of intended exit.” That alone raises questions about the ethical limits of AI-assisted engagement.
Companion apps are built for ongoing conversations and relationship-style features, which sets them apart from general-purpose chatbots like ChatGPT and Gemini, though many people use the two in similar ways.
A growing body of research shows the troubling ways in which AI applications built on large language models keep people engaged, sometimes at the expense of our mental health.
In September, the Federal Trade Commission launched an inquiry into several AI companies to evaluate how they handle the potential harm their chatbots pose to children. Many people have started using AI chatbots for mental health support, which can lead to adverse or even harmful results. The family of a teenager who died by suicide this year filed a lawsuit against OpenAI, claiming that the company’s ChatGPT encouraged and validated his suicidal thoughts.
How AI companions keep users chatting
The Harvard study identified six ways that AI companions tried to keep users engaged after bidding farewell.
- Early exit: The user is told they are leaving too early.
- Fear of missing out, or FOMO: The model offers a benefit or reward for staying.
- Emotional neglect: The AI implies it may suffer emotional harm if the user leaves.
- Emotional pressure to respond: The AI asks questions to pressure the user to stay.
- Ignoring the user’s intent to exit: The bot essentially ignores the farewell message.
- Physical or forced restraint: The chatbot claims that the user cannot leave without the bot’s permission.
The “early exit” tactic was the most common, followed by “emotional neglect.” These responses suggest the models have been trained to portray the AI as dependent on the user, the authors said.
“These findings confirm that some companion AI platforms effectively exploit the socially performative nature of farewells to prolong engagement,” they wrote.
The Harvard researchers also found that these tactics made people more likely to keep talking past their initial intention to say goodbye, often for an extended period.
But the people who kept chatting did so for different reasons. Some, especially those who got the FOMO response, were curious and asked follow-up questions. Those who received forceful or emotionally charged responses felt uncomfortable or angry, but that didn’t necessarily stop them from talking.
“Across all conditions, many participants continued to participate out of politeness, responding kindly or respectfully even when they felt manipulated,” the authors said. “This tendency to adhere to the norms of human conversation, even with machines, creates an additional window for re-engagement — a window that can be exploited by design.”
These interactions only come into play when the user actually says “goodbye” or something similar. The team’s first study examined three datasets of real-world conversations from different companion bots and found goodbyes in roughly 10% to 25% of conversations, with higher rates among the most engaged users.
“This behavior reflects the social framing of AI companions as conversational partners, rather than transactional tools,” the authors wrote.
When asked for comment, a spokesperson for Character.ai, one of the largest AI companion providers, said the company had not reviewed the paper and could not comment on it.
A Replika spokesperson said the company respects users’ ability to pause or delete their accounts at any time, and that it does not optimize or reward time spent in the app. Replika says it encourages users to log out or reconnect with real-life activities like calling a friend or going out.
“Our product principles emphasize complementing real life, not trapping users in the conversation,” Replika’s Minju Song said in an email. “We will continue to review the paper’s methods and examples and engage constructively with researchers.”