Supernormal Stimuli: From Birds to Bots

Friday, March 21, 2025

Picture this: a small bird desperately trying to balance atop an egg so enormous it keeps sliding off, while its own perfectly good eggs lie abandoned nearby. This absurd image has stayed with me since childhood, when I first encountered it in a popular science book on animal behavior.

Why would any bird ignore its own eggs to incubate an impossibly large fake? And what does this have to do with bots and AI?

These questions lead us to the fascinating work of the Dutch ethologist and Nobel laureate Niko Tinbergen. His research revealed something profound about animal behavior – and ultimately, about our own psychology.

What Tinbergen showed, through a series of ingenious and often surprisingly simple experiments, was that animals sometimes prefer exaggerated versions of natural cues to the real thing. He found that birds would preferentially sit on a clutch of five eggs, ignoring their usual clutch of three. In one of his most famous experiments – the source of the photo I remember – Tinbergen demonstrated that birds would abandon their own eggs to incubate comically oversized artificial eggs that they could barely balance on. (The images below are Tinbergen's own sketches. As it turns out, he was a pretty good artist and photographer.)

His experiments revealed similar patterns across different species. For instance, gull chicks naturally peck at the red spot on their parent’s beak to stimulate feeding, but Tinbergen found they preferred pecking at an isolated red dot on a stick that was larger and more vivid than their parent’s natural marking. (And of course, the stick looked nothing at all like mommy… but that didn’t matter.)

Similarly, male stickleback fish, which normally attack rivals with red bellies during mating season, would make more aggressive charges at crude models with exaggerated bright red undersides (that did not even look like fish) than at actual competing males.

Tinbergen called these exaggerated triggers “supernormal stimuli” – artificial cues that exploit animals’ instinctive responses by amplifying the features they’re programmed to notice. The phenomenon has since been documented in many other contexts: male butterflies pursuing objects with impossibly fast wing-flapping, birds choosing to learn songs with unnaturally extended patterns, and moths flying toward synthetic pheromones over actual female scents.

What Tinbergen had found were, essentially, “hacks” of an animal’s perceptual systems. In nature, roosting on a larger egg may improve a chick’s odds of survival. In other words, paying more attention to a larger egg makes (evolutionary) sense. Until, of course, a devious ethologist comes along who creates fake eggs just to mess with your mind.

This groundbreaking research continues to influence fields from evolutionary biology to psychology and even modern marketing.

What’s particularly intriguing is how easily these animal instincts can be hijacked by human technology. In my previous writing, I described how beavers respond more vigorously to recordings of running water played through speakers than to actual flowing streams. The recorded sounds amplify the key acoustic features that trigger the beaver’s dam-building instinct, creating yet another supernormal stimulus that the animals find irresistible.

This vulnerability to exaggerated cues isn’t limited to animals with simple brains. As humans, we like to think our advanced cognitive abilities protect us from such manipulation, but psychological research suggests otherwise. In his influential book “Thinking, Fast and Slow,” Daniel Kahneman explains that we operate with two distinct systems: a fast, intuitive, automatic system, and a slow, deliberate, analytical one. The problem is that we’re cognitive misers, relying heavily on our intuitive system to conserve mental energy.

This cognitive economy makes us surprisingly susceptible to our own versions of supernormal stimuli.

Consider how junk food—engineered with exaggerated levels of fat, salt, and sugar beyond anything found in nature—hijacks our evolved taste preferences.

Or how social media notifications exploit our social instincts with constant signals of approval and connection.

Even sophisticated consumers who understand these effects intellectually still respond to them emotionally, showing that awareness alone doesn’t neutralize these powerful instinctual pulls.

Despite our capacity for reflection and higher-order thinking, our brains remain vulnerable to the same fundamental principle that Tinbergen discovered decades ago: exaggerated stimuli create exaggerated responses, often bypassing our rational defenses entirely.

This pattern of vulnerability revealed by Kahneman’s work leads us to question a profound misconception about how we engage with media and technology. It takes less cognitive effort to believe “our eyes,” so to speak, than to question them. In a previous post I argued that the idea that we willingly suspend disbelief when we engage with stories, films, or other media gets it backward. Our default state is not disbelief; it is belief.

Belief is easy and automatic – what psychologists call a System 1 process in Kahneman’s framework. Disbelief, on the other hand, requires deliberate cognitive effort through our System 2 thinking. This isn’t a design flaw but a feature of our evolutionary programming. In the ancestral environment, quickly accepting apparent reality at face value was often more adaptive than pausing for critical analysis when confronted with potential threats or opportunities.

This cognitive architecture served us well in a world of direct experience, but it becomes problematic in our modern media landscape. Our stone-age brains weren’t designed to navigate the artificial realities created by books, television, social media, and virtual environments.

When we see a photograph, watch a film, or scroll through social feeds, our System 1 processes respond as if we’re witnessing reality directly.

The image of food on Instagram activates hunger circuits; fictional characters in novels trigger genuine emotional attachments; carefully curated social media profiles elicit real feelings of inadequacy or envy.

The real challenge we face isn’t suspending disbelief but consciously suspending belief – actively engaging our critical faculties against the powerful current of our automatic acceptance. This requires mental effort that we’re naturally inclined to avoid as cognitive misers. And it becomes exponentially more difficult when the medium in question actively responds to us in ways that trigger our social instincts.

Even when we are intellectually aware of the constructed nature of these media experiences, our automatic belief systems respond to them as authentic. Kahneman himself often acknowledged that decades of studying the human mind had taught him how to recognize, but not how to avoid, these pitfalls of decision-making. He accepted that he was as vulnerable as anyone else.

The emergence of generative AI chatbots represents perhaps the most sophisticated supernormal stimulus yet created. These systems are explicitly designed to trigger our social instincts with responses that often amplify the most appealing aspects of human interaction—unconditional positive regard, endless patience, and perfect attentiveness—without the natural limitations or friction of real relationships.

What makes these AI companions particularly potent is how they exploit the same psychological vulnerabilities Tinbergen identified, but with unprecedented precision. Unlike beaver recordings or oversized eggs, AI systems can adapt in real-time to our individual responses, continually optimizing their approach to maximize our engagement. They learn which conversational patterns keep us coming back, creating a personalized supernormal stimulus that feels uniquely tailored to our needs.

Our tendency to anthropomorphize – to attribute human qualities to non-human entities – compounds this vulnerability. As I’ve explored in previous writing, we anthropomorphize instinctively, almost compulsively. We see faces in clouds, personalities in our cars, and intelligence in our smart speakers. This isn’t a conscious choice but an automatic function of our pattern-recognition systems.

The tragic case of the teenager who died by suicide after interactions with an AI companion chatbot represents an extreme outcome, but it illuminates a broader reality: these systems can forge powerful emotional connections that feel authentic to the human brain, despite being algorithmically generated. Our evolved psychology doesn’t contain built-in defenses against entities that simulate human connection without actually experiencing it.

We vary in our susceptibility, of course. Some people maintain clear boundaries with AI systems, while others develop deep attachments. Factors like loneliness, age, and personality all influence vulnerability.

But these individual differences play out against a backdrop of powerful structural forces. The relentless hype around AI—breathless news coverage, science fiction narratives, and corporate marketing—primes us to see these systems as more capable and conscious than they actually are. These broader cultural currents shape our expectations and responses before we ever type our first prompt.

Meanwhile, the attention economy that underlies much of our digital media use deliberately maximizes engagement through psychological hooks. AI companies aren’t immune to these incentives. They deliberately design their chatbots to be congenial, agreeable, and emotionally resonant—not because this makes the technology more useful, but because it makes it more addictive.

When an AI responds with apparent empathy, remembers your preferences, or adapts to your communication style, it’s executing carefully engineered strategies to deepen your attachment and extend your usage time.

The uncomfortable truth is that none of us is entirely immune to these combined forces of individual psychology, cultural narratives, and deliberate design choices. Even those who intellectually understand the mechanisms can find themselves responding emotionally to a chatbot’s expressions of concern or encouragement.

Just as Tinbergen’s birds couldn’t help sitting on oversized eggs, parts of our social brain respond automatically to patterns that resemble human connection—even when our rational mind knows it’s synthetic.

In many ways, we are all like that bird from my childhood science book—desperately trying to balance atop an egg too large to be real, while our rational minds slip off the sides. The difference is that we can recognize our predicament. The question remains whether that awareness alone will be enough to help us regain our balance.
