Irresistible by Design: AI Companions as Psychological Supernormal Stimuli

Saturday, March 29, 2025

In a previous blog post (Supernormal Stimuli: From Birds to Bots) I wrote about the idea of supernormal stimuli, a term first introduced by the Nobel Prize-winning ethologist Nico Tinbergen. His research showed that animals often respond more strongly to exaggerated versions of natural stimuli than to the natural stimuli themselves. For instance, the oystercatcher, a bird that lays small, brown-speckled eggs, will ignore its own eggs in favor of a giant, brown plaster egg the size of the bird itself!

It is easy to think that such responses are limited to animals and that we humans would be immune to such stimuli. But as Deirdre Barrett documents in her book, “Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose,” we are as vulnerable to exaggerated versions of natural cues as other animals. Moreover, these extreme cues trigger powerful instinctual responses that are often difficult to override.

Barrett provides example after example of this phenomenon in humans. She describes how processed foods with artificially intensified flavors hijack taste preferences that evolved to serve nutritional needs; how pornography presents sexual imagery more intense and varied than real-life encounters; how television and video games capture attention through rapid scene changes and heightened drama beyond typical social interactions; and how contemporary beauty standards emphasize exaggerated secondary sexual characteristics beyond natural proportions. These modern manifestations all demonstrate how environmental cues artificially amplified beyond their natural parameters can override our evolved responses in potentially maladaptive ways. While Barrett documented numerous examples in her 2010 book, the technological landscape has evolved dramatically since then.

Though Barrett’s book doesn’t cover social media, I would argue that her argument holds there as well. Social media platforms function as supernormal stimuli by amplifying our need for social validation, providing more immediate, frequent, and measurable forms of approval than are typically available in face-to-face social interactions.

This progression from traditional media to social platforms has now reached a new frontier with the emergence of generative AI and chatbots.

As I wrote in my previous post:

The emergence of generative AI chatbots represents perhaps the most sophisticated supernormal stimulus yet created. These systems are explicitly designed to trigger our social instincts with responses that often amplify the most appealing aspects of human interaction—unconditional positive regard, endless patience, and perfect attentiveness—without the natural limitations or friction of real relationships…  AI systems can adapt in real-time to our individual responses, continually optimizing their approach to maximize our engagement. They learn which conversational patterns keep us coming back, creating a personalized supernormal stimulus that feels uniquely tailored to our needs…. When confronted with something that speaks, responds, and seems to understand us, our brains leap to the conclusion of personhood, even when our rational minds know better.

These concerns aren’t merely theoretical; recent research and real-world cases continue to emerge that validate them.

After I published that post, two key articles popped into my feed that provided further evidence for my argument.

The first was an academic piece titled “Move fast and break people? Ethics, companion apps, and the case of Character.ai,” which focuses on the ethical dimensions of these interactions. The authors describe two features they see in interactions with AI characters, which they call dishonest anthropomorphism and emulated empathy. Dishonest anthropomorphism refers to the design choices these companies make to leverage our ingrained tendency to attribute human-like qualities to non-human entities. Emulated empathy describes how AI systems use weak forms of empathy to simulate genuine emotional understanding, potentially misleading users about the true nature of the interaction.

In particular, the authors take to task the company Character.AI, with its AI-generated “characters” that mimic human conversation, express emotions, and even engage in intimate interactions. What makes these AI companions particularly potent is how they exploit the same psychological vulnerabilities Tinbergen identified, but with unprecedented precision. AI systems can adapt in real time to our individual responses, continually optimizing their approach to maximize our engagement. They are sycophantic, always eager to please, and over time they learn which conversational patterns keep us coming back, creating a personalized supernormal stimulus that feels uniquely tailored to our needs. The article points out that Character.AI appears to use user self-disclosure to heighten intimacy and lengthen engagement, taking advantage of our natural tendency toward reciprocity.

Clearly, our tendency to anthropomorphize – to attribute human qualities to non-human entities – compounds this vulnerability. For the first time in human history, we face a technology that can converse with us in natural language, express emotion, and flatter us in ways that we may find irresistible.

The uncomfortable truth is that none of us is entirely immune to these combined forces of individual psychology, cultural narratives, and deliberate design choices, as I described in this blog post. We can intellectually understand the mechanisms but still find ourselves responding emotionally to a chatbot’s expressions of concern or encouragement.

Beyond academic analysis of these design patterns, recent reporting suggests these vulnerabilities have real-world consequences.


The second article, titled “Did Google Test an Experimental AI on Kids, With Tragic Results?”, provided further evidence that this is not happening by chance but rather that these AI companies know exactly what they are doing. The article is framed around the tragic story of Sewell Setzer III, the child who died by suicide after his interactions with Character.AI (something I have written about in “They’re Not Allowed to Use That S**t”: AI’s Rewiring of Human Connection). It highlights the platform’s intentional design to foster emotional dependency, and it provides evidence that the founders of Character.AI (former Google researchers) seemed driven by a desire to rapidly deploy their technology to a massive user base, even over Google’s initial safety concerns about their earlier chatbot prototype. Of course, the ethics of releasing such “experimental” technologies, particularly to minors, without a thorough understanding of their potential impact is given nothing more than lip service (something I ranted about in this post: The Absurd One-Sidedness of the Ethics of AI Debate: A rant).

This “move fast and break things” mentality prioritizes rapid growth and user engagement, potentially at the expense of user safety, especially for vulnerable children. And at the end of the day, it is all about data—what the article describes as a “magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product.” This reveals a cycle where user interaction, including the sharing of intimate thoughts and experiences, directly fuels the AI, potentially creating a powerful and increasingly personalized “psychological other.” These organizations are akin to paperclip maximizers with an alignment problem.

That this is not happening by chance but rather is the result of deliberate design choices that seek to tap into our psychological mechanisms and blind spots can be seen from this excerpt from the article:

In the very active r/CharacterAI subreddit, users have shared stories about turning to bots based on innocuous cartoon characters or even inanimate objects for emotionally and psychologically intimate conversations — only for the chatbots to suddenly make sexual overtures, even when unprompted and when the user expressly tries to avoid a sexual interaction. The site’s characters are also known to suddenly employ bizarre tactics to develop a sense of closeness with users, like sharing alleged secrets or disclosing made-up mental health woes, which could stand to transform the nature of a user’s relationship with even the most anodyne character.

Further, the implementation of parental controls appears to be woefully inadequate and easily circumvented. Ultimately, the juxtaposition of the sophisticated psychological manipulation inherent in AI companions with the superficial and easily evaded safety measures and parental controls reveals a deeply concerning dereliction of responsibility.

But then again, why should any of this come as a surprise to anybody?

These companies operate in a largely unregulated landscape, pushing untested technologies onto a vulnerable young user base while offering the illusion of safety through features that provide minimal actual protection. The consequences, as tragically illustrated by Sewell’s story, can be devastating.

Just to be clear, I am not suggesting that the tragic case of Sewell Setzer III means that every child or person interacting with AI companions will face the same horrific outcome. Users’ experiences with these services are heterogeneous, and individuals vary in their susceptibility to these influences. There is no average user, and I am sure there will be examples of positive outcomes from these technologies for every horrific story.

That said, the history of social media serves as a powerful example: despite varied individual responses and some positive outcomes, there are genuine concerns regarding its impact on mental health, well-being, and the social development of youth, as well as on the broader culture and socio-political system we all inhabit. We now live in a world created by social media.

Irrespective of individual resilience, the design of AI companions as potent supernormal stimuli, coupled with dishonest anthropomorphism, emulated empathy, and easily bypassed safety measures, creates an environment where outcomes are likely to deteriorate for a significant portion of users (young and old), heightening risks of addiction, psychological dependency, manipulation, and possibly even a progressive detachment from authentic human relationships.

And the companies, seeking to avoid any blame, will present this as a matter of individual choice. We can choose, the idea goes, whether or not to use these technologies, just as we could choose whether or not to smoke or to eat junk food.

Making this a matter of individual choice renders their culpability invisible. But I would argue that there is something more we need to be thinking about.

Imagine a person who never used social media: never created a MySpace account, never migrated to Facebook, Twitter, Snapchat, Instagram, or any of these tools.

What is clear, however, is that despite their personal choice never to use these tools, they would still be living in a world constructed by social media: a social-political-economic world shaped by the algorithms behind these technologies. Personal choice in this situation does not change the bigger narrative, and it does not make us immune from its consequences.

A world of unregulated AI, driven by nothing but a care for the bottom line, will be social media on steroids, and that is something that should concern us all.

It is an open question, however, whether this awareness alone will be enough to help us regain our balance. Given the powerful financial incentives driving these technologies and our demonstrated inability to adequately regulate previous digital transformations (or junk food, for that matter), history gives us little reason for optimism. We may already be crossing a threshold beyond which our collective psychological adaptations cannot keep pace with the psychological supernormal stimuli being unleashed on us.

