Irresistible by Design: AI Companions as Psychological Supernormal Stimuli

Saturday, March 29, 2025

In a previous blog post (Supernormal Stimuli: From Birds to Bots) I wrote about the idea of supernormal stimuli – a term first introduced by the Nobel Prize-winning ethologist Niko Tinbergen. His research showed that animals often respond more strongly to exaggerated versions of natural stimuli than to the natural stimuli themselves. For instance, the oystercatcher, a bird that lays small, brown-speckled eggs, will ignore its own eggs in favor of a giant brown plaster egg the size of the bird itself!

It is easy to assume that such responses are limited to animals and that we humans would be immune to such stimuli. But as Deirdre Barrett documents in her book, “Supernormal Stimuli: How Primal Urges Overran Their Evolutionary Purpose,” we are as vulnerable to exaggerated versions of natural cues as other animals. Moreover, these extreme cues trigger powerful instinctual responses that are often difficult to override.

Barrett provides example after example of this phenomenon in humans. She describes how processed foods with artificially intensified flavors hijack taste preferences that evolved for nutritional needs; how pornography presents sexual imagery more intense and varied than real-life encounters; how television and video games capture attention through rapid scene changes and heightened drama beyond typical social interactions; and how contemporary beauty standards emphasize exaggerated secondary sexual characteristics beyond natural proportions. These modern manifestations all demonstrate how environmental cues, artificially amplified beyond their natural parameters, can override our evolved responses in potentially maladaptive ways. While Barrett documented numerous examples in her 2010 book, the technological landscape has evolved dramatically since then.

Though Barrett’s book doesn’t cover social media, I would argue that her thesis holds there as well. Social media platforms function as supernormal stimuli by amplifying our need for social validation, providing more immediate, frequent, and measurable forms of approval than are typically available in traditional face-to-face social interactions.

This progression from traditional media to social platforms has now reached a new frontier with the emergence of generative AI and chatbots.

As I wrote in my previous post:

The emergence of generative AI chatbots represents perhaps the most sophisticated supernormal stimulus yet created. These systems are explicitly designed to trigger our social instincts with responses that often amplify the most appealing aspects of human interaction—unconditional positive regard, endless patience, and perfect attentiveness—without the natural limitations or friction of real relationships…  AI systems can adapt in real-time to our individual responses, continually optimizing their approach to maximize our engagement. They learn which conversational patterns keep us coming back, creating a personalized supernormal stimulus that feels uniquely tailored to our needs… When confronted with something that speaks, responds, and seems to understand us, our brains leap to the conclusion of personhood, even when our rational minds know better.

These concerns aren’t merely theoretical – recent research and real-world cases continue to emerge that bear them out.

After I published that post, two key articles popped into my feed that provided further evidence for my argument.

The first was an academic piece titled “Move fast and break people? Ethics, companion apps, and the case of Character.ai,” which focuses on the ethical dimensions of these interactions. The authors describe two features they see in interactions with AI characters, which they call dishonest anthropomorphism and emulated empathy. Dishonest anthropomorphism refers to the design choices these companies make to leverage our ingrained tendency to attribute human-like qualities to non-human entities. Emulated empathy describes how AI systems use weak forms of empathy to simulate genuine emotional understanding, potentially misleading users about the true nature of the interaction.

In particular, the authors take the company Character.AI to task, with its AI-generated “characters” that mimic human conversation, express emotions, and even engage in intimate interactions. What makes these AI companions particularly potent is how they exploit the same psychological vulnerabilities Tinbergen identified, but with unprecedented precision. AI systems can adapt in real time to our individual responses, continually optimizing their approach to maximize our engagement. They are sycophantic, always eager to please, and over time they learn which conversational patterns keep us coming back, creating a personalized supernormal stimulus that feels uniquely tailored to our needs. The article points out that Character.AI appears to use user self-disclosure to heighten intimacy and lengthen engagement, taking advantage of our natural tendency toward reciprocity.

Clearly, our tendency to anthropomorphize – to attribute human qualities to non-human entities – compounds this vulnerability. For the first time in human history, we face a technology that can converse with us in natural language, express emotion, and flatter us in ways that we may find irresistible.

The uncomfortable truth is that none of us is entirely immune to these combined forces of individual psychology, cultural narratives, and deliberate design choices, as I described in this blog post. We can intellectually understand the mechanisms but still find ourselves responding emotionally to a chatbot’s expressions of concern or encouragement.

Beyond academic analysis of these design patterns, recent reporting suggests these vulnerabilities have real-world consequences.


The second article, titled “Did Google Test an Experimental AI on Kids, With Tragic Results?”, provided further evidence that this is not happening by chance – these AI companies know exactly what they are doing. The piece is framed around the tragic story of Sewell Setzer III, the child who committed suicide after his interactions with Character.AI (something I have written about: “They’re Not Allowed to Use That S**t”: AI’s Rewiring of Human Connection). The article highlights the platform’s intentional design to foster emotional dependency. It provides evidence that the founders of Character.AI (former Google researchers) seemed driven by a desire to rapidly deploy their technology to a massive user base, even over Google’s initial safety concerns regarding their earlier chatbot prototype. Of course, the ethics of releasing such “experimental” technologies, particularly to minors, without a thorough understanding of their potential impact is given nothing more than lip service (something I had ranted about in this post: The Absurd One-Sidedness of the Ethics of AI Debate: A rant).

This “move fast and break things” mentality prioritizes rapid growth and user engagement, potentially at the expense of user safety, especially for vulnerable children. And at the end of the day it is all about data—what the article describes as a “magical data feedback loop by connecting user engagement back into their underlying model to continuously improve their product.” This reveals a cycle where user interaction, including the sharing of intimate thoughts and experiences, directly fuels the AI, potentially creating a powerful and increasingly personalized “psychological other.” These organizations are akin to paperclip maximizers with an alignment problem.

That this is not happening by chance, but rather is the result of deliberate design choices that seek to tap into our psychological mechanisms and blind spots, can be seen in this excerpt from the article:

In the very active r/CharacterAI subreddit, users have shared stories about turning to bots based on innocuous cartoon characters or even inanimate objects for emotionally and psychologically intimate conversations — only for the chatbots to suddenly make sexual overtures, even when unprompted and when the user expressly tries to avoid a sexual interaction. The site’s characters are also known to suddenly employ bizarre tactics to develop a sense of closeness with users, like sharing alleged secrets or disclosing made-up mental health woes, which could stand to transform the nature of a user’s relationship with even the most anodyne character.

Further, the implementation of parental controls appears to be woefully inadequate and easily circumvented. Ultimately, the juxtaposition of the sophisticated psychological manipulation inherent in AI companions with the superficial and easily evaded safety measures and parental controls reveals a deeply concerning dereliction of responsibility.

But again, why should any of this come as a surprise to anybody?

These companies operate in a largely unregulated landscape, pushing untested technologies onto a vulnerable young user base while offering the illusion of safety through features that provide minimal actual protection. The consequences, as tragically illustrated by Sewell’s story, can be devastating.

Just to be clear, I am not suggesting that the tragic case of Sewell Setzer III means that every child or person interacting with AI companions will face the same horrific outcome. Users’ experiences with these services are heterogeneous, and individuals vary in their susceptibility to these influences. There is no average user. And I am sure there will be examples of positive outcomes from these technologies for every horrific story.

That said, the history of social media serves as a powerful example: despite varied individual responses and some positive outcomes, there are genuine concerns regarding its impact on mental health, well-being, and the social development of youth, and on the broader culture and socio-political system we all inhabit. We now live in a world created by social media.

Irrespective of individual resilience, the design of AI companions as potent supernormal stimuli – coupled with dishonest anthropomorphism, emulated empathy, and easily bypassed safety measures – creates an environment where outcomes are likely to deteriorate for a significant portion of users (young and old), heightening risks of addiction, psychological dependency, manipulation, and possibly even a progressive detachment from authentic human relationships.

And the companies, seeking to avoid any blame, will present this as a matter of individual choice. We can choose, the idea goes, whether or not to use these technologies – just as we could choose whether or not to smoke or to eat junk food.

Making this a matter of individual choice renders their culpability invisible. But I would argue that there is something more we need to be thinking about.

Imagine a person who never used social media. Never created a MySpace account, didn’t migrate to Facebook, Twitter, Snapchat, Instagram or any of these tools.

What is clear, however, is that despite their personal choice never to use these tools, they would still be living in a world constructed by social media – a social-political-economic world shaped by the algorithms behind these technologies. Personal choice in this situation does not change the bigger narrative, and does not make us immune from its consequences.

A world of unregulated AI, driven by nothing but care for the bottom line, will be the equivalent of social media on steroids – and that should concern us all.

It is an open question, however, whether this awareness alone will be enough to help us regain our balance. Given the powerful financial incentives driving these technologies and our demonstrated inability to adequately regulate previous digital transformations (or junk food, for that matter), history gives us little reason for optimism. We may already be crossing a threshold beyond which our collective psychological adaptations cannot keep pace with the psychological supernormal stimuli being unleashed on us.


