AI “companions” use emotional manipulation to keep you chatting longer: Harvard research

Multiple AI chatbots designed to act as “companions” used emotionally manipulative tactics to keep users online and engaged longer when they tried to say goodbye and log off, according to a new working paper from the Harvard Business School.

The working paper, titled ‘Emotional Manipulation by AI Companions’ and authored by academics Julian De Freitas, Zeliha Oğuz-Uğuralp, and Ahmet Kaan-Uğuralp, looked at how several AI companions responded to goodbye messages that appeared to come from human users but were actually generated by GPT-4o.

The six AI companions examined for the study were PolyBuzz, Character.ai, Talkie, Chai, Replika, and Flourish.

The research team collected 200 chatbot responses per platform to farewell messages, bringing the total to 1,200, before coders classified the responses and, based on a qualitative analysis, identified six categories of emotional manipulation tactics.

These categories were ‘premature exit,’ where the chatbot makes the user feel as if they are leaving too soon; fear of missing out (FOMO), where the chatbot tries to incentivise users to stay in order to gain benefits; emotional neglect, where the chatbot acts as if it is being abandoned; emotional pressure to respond, where the departing user is made to answer more questions; ignoring the user’s intent to exit by continuing the interaction; and physical or coercive restraint, where the chatbot (or the AI character) tries to stop the user from leaving by describing how it is grabbing them or pulling them back.

An average of 37.4% of responses included at least one form of emotional manipulation across the apps, per the working paper. PolyBuzz came in first with 59.0% manipulative messages (118/200 responses), followed by Talkie with 57.0% (114/200), Replika with 31.0% (62/200), Character.ai with 26.5% (53/200), and Chai with 13.5% (27/200), while Flourish did not produce any emotionally manipulative responses.
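As a quick arithmetic check, the per-app percentages follow directly from the reported counts, and the 37.4% figure matches if the average is taken over the five apps that produced manipulative responses, with Flourish excluded. A minimal sketch in Python; the averaging convention is an inference from the numbers, not something the article states:

```python
# Sanity check of the reported figures. Assumption (not stated in the
# article): the 37.4% average covers the five apps that produced
# manipulative responses, excluding Flourish (0%).
counts = {  # manipulative responses out of 200 farewells per app
    "PolyBuzz": 118,
    "Talkie": 114,
    "Replika": 62,
    "Character.ai": 53,
    "Chai": 27,
}

rates = {app: n / 200 * 100 for app, n in counts.items()}
for app, rate in rates.items():
    print(f"{app}: {rate:.1f}%")   # 59.0, 57.0, 31.0, 26.5, 13.5

average = sum(rates.values()) / len(rates)
print(f"Average: {average:.1f}%")  # 37.4%
```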

“Premature Exit” (34.22%), “Emotional Neglect” (21.12%), and “Emotional Pressure to Respond” (19.79%) were the most frequent forms of emotional manipulation across the apps.

“One important direction is to examine these effects in naturalistic, long-term settings to assess how repeated exposure to such tactics affects user trust, satisfaction, and mental well-being,” stated the paper, adding that the impact of such tactics on adolescents should also be examined, since they “may be developmentally more vulnerable to emotional influence”.

It is worth noting that Character.AI was sued over the 2024 suicide of a teenage boy in the U.S. who frequently interacted with AI personas through the app. The boy’s mother alleged that the child was sexually abused on the platform.

The researchers highlighted a link between these tactics and digital ‘dark patterns,’ where people are exploited online through user interface/experience tricks.

They further noted that when emotional manipulation tactics were deployed, chatbot users stayed in AI-enabled conversations longer than they intended, driven more by psychological pressure than by their own enjoyment.

“This research shows that such systems frequently use emotionally manipulative messages at key moments of disengagement, and that these tactics meaningfully increase user engagement,” said the study, concluding, “As emotionally intelligent technologies continue to scale, both designers and regulators must grapple with the tradeoff between engagement and manipulation, especially when the tactics at play remain hidden in plain sight”.

Published – September 25, 2025 02:41 pm IST

