Imagine a world where tape recorders fool beavers, triangles tell stories, and AI convinces us it’s sentient.
Welcome to reality—where our cognitive biases are colliding with technology in ways we’re only beginning to understand. In this post, I focus on our tendency to over-anthropomorphize and argue that it is a cognitive illusion we have no choice but to fall prey to. This involuntary attribution of human-like qualities has profound implications for how we think about, talk about, and work with generative AI.
Let’s start with this video, created for an experiment back in the 1940s by the psychologists Heider and Simmel.
What did you see?
If you’re like most people, you saw a story unfold. A tale of pursuit, conflict, perhaps even romance.
Despite no explicit narrative or human figures, nearly all viewers interpreted these simple shapes as characters with intentions, emotions, and personalities. Participants often described complex social scenarios, such as bullying, chasing, or romantic interactions, projecting rich narratives onto the random movements of triangles and circles.
This experiment revealed a fundamental truth about human cognition: we are hardwired to see intentionality, even where none exists. It points to our innate predisposition to anthropomorphize and see purposeful behavior in even the most abstract stimuli, highlighting a fundamental aspect of human social cognition that influences how we interpret the world around us.
Aside: As it happens, not all people “see” these scenarios. In the original study, 33 out of 34 people saw the “human” story (as it were), but one person did not. This individual described the scene in purely geometric terms. And that person is not alone. Multiple experiments have shown that there is a small but significant number of people who do not fall prey to this illusion. They see right through to the truth. It turns out that these people are often on the autism spectrum and have a more literal interpretation of the video. There is some evidence for cultural differences in responses as well. For instance, some research has shown that while most participants across cultures saw intentionality, the specific interpretations and the degree of anthropomorphization varied. Finally, more open-ended instructions tend to elicit more anthropomorphic descriptions, while more specific or analytical prompts may reduce this tendency.
From Shapes to Machines
This tendency doesn’t stop with abstract animations. We anthropomorphize relentlessly.
Nass and Reeves (and some research I was involved with almost two decades ago) demonstrated that this tendency to anthropomorphize goes beyond films of geometric objects. For instance, see this article: Does my wordprocessor have a personality? and this one: Affective Feedback from Computers and its Effect on Perceived Ability and Affect.
Essentially, this line of work argued that we attribute personality to ATMs. We argue with GPS systems. We name our cars.
In my work I called this Topffer’s Law (named after Rodolphe Töpffer, the father of the comic book). Topffer’s Law, modified for the world of digital technology, states:
Topffer’s Law: All interfaces (however badly developed) have personality, and that personality can be generated through the subtlest of cues.
It’s not a choice. It’s a cognitive reflex.
Why does this happen? Stone Age Minds in the Age of AI
The best analogy I have for explaining this comes from the behavior of another mammal and its response to media. Let’s look at the beaver: nature’s adorable engineer, constructing intricate structures from wood, mud, and stones to create ponds that provide protection from predators and easy access to food. These furry construction workers use their sharp teeth to fell trees and shape branches, strategically placing materials to control water flow and transform their environment.
How does the beaver know where to build its dams, though?
As it turns out, evolution has programmed beavers to build dams at the sound of running water. Not the sight. Not the feel. The sound. As this article says, The Sound of Running Water Puts Beavers in the Mood to Build. And this makes perfect evolutionary sense: in the real world, the sound of running water is almost perfectly correlated with the presence of running water. Sound of water = presence of water!
What evolution didn’t prepare beavers for was the existence of another industrious mammal – the intrepid professor studying animal behavior. And what it really didn’t prepare the beaver for was the modern tape recorder.
Which is what they ran into with the Swedish biologist Lars Wilsson in the 1960s. As the article describes in detail, Wilsson’s experiment revealed the power of auditory cues in beaver behavior. When played the sound of running water through a speaker, beavers immediately began building dams—even on dry concrete floors. Remarkably, when presented with both a silent but visible water leak and the sound of running water, the beavers ignored the actual leak, choosing instead to build over the speaker. This demonstrates how strongly their instincts are tied to auditory stimuli, even when visual evidence contradicts what they hear.
Their brains were fooled by media.
We’re not so different.
We have stone age minds in the age of AI. Our cognitive reflexes, honed over millennia, are now facing stimuli their evolution never anticipated.
We are, at our core, social beings. A significant portion of our brain is devoted to understanding others – their intentions, emotions, and thoughts. This is our theory of mind, and it’s hypersensitive.
It kicks in with the smallest of cues. And it cannot be stopped.
Anthropomorphization isn’t a choice. It’s a cognitive illusion, and one we cannot keep from kicking in.
There are those who argue that it is wrong to anthropomorphize – and at one level they are absolutely right. Nothing I am saying implies that these LLMs have beliefs, desires, or other psychological states, and it IS incorrect of us to make those attributions. What I AM saying is that these illusions cannot be wished away simply by our deciding not to fall for them.
Consider the well-known Müller-Lyer illusion below. Despite the fact that we KNOW the two lines are of the same length, one still looks longer than the other. (This is something our family has played with for years in creating our New Year’s videos – this and this may be the best examples of how our minds can be fooled.)
And just as we can’t see the Müller-Lyer lines as equal even after we have measured them, the beaver ignores the actual leak and builds its dam where the sound of running water is coming from.
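If you want to convince yourself, here is a minimal sketch (my own illustration, assuming Python with matplotlib, neither of which appears in the post itself) that draws the Müller-Lyer figure from two shafts of exactly the same length:

```python
# A minimal sketch of the Müller-Lyer illusion: two horizontal shafts of
# identical length, one with outward-pointing fins, one with inward-pointing fins.
import matplotlib.pyplot as plt

def shaft_with_fins(ax, y, direction):
    """Draw a 10-unit shaft at height y.
    direction=+1 adds outward fins (the shaft tends to look longer);
    direction=-1 adds inward fins (the shaft tends to look shorter)."""
    ax.plot([0, 10], [y, y], color="black")            # the shaft: always 10 units
    for x_end, sign in [(0, -1), (10, 1)]:             # left end, right end
        ax.plot([x_end, x_end + sign * direction], [y, y + 1], color="black")
        ax.plot([x_end, x_end + sign * direction], [y, y - 1], color="black")

fig, ax = plt.subplots()
shaft_with_fins(ax, y=3, direction=+1)   # fins out
shaft_with_fins(ax, y=0, direction=-1)   # fins in
ax.set_aspect("equal")
ax.axis("off")
plt.show()  # both shafts are exactly the same length, yet they refuse to look equal
```

Writing (or measuring) the figure yourself changes nothing about how it looks, which is exactly the point.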
Enter AI: The Ultimate Anthropomorphic Trigger
And now we live in a world of AI.
A technology that can produce coherent, contextually relevant responses and engage in natural language conversations across diverse topics. These technologies can simulate creativity in tasks like writing or art creation, mimic human language patterns, and adapt to user preferences. Further, they can remember context, display apparent knowledge, express uncertainty, and admit mistakes.
Is it surprising that they create the impression of a unique personality with self-awareness? Is it any surprise that we attribute human-like qualities such as emotions, thoughts, and consciousness to them?
I mean, we did that with triangles and circles bumping around on a white screen.
Our stone age brains don’t stand a chance.
Our brains, primed by evolution to detect agency, are faced with entities that mimic human-like responses with unprecedented fidelity. Just as the beaver responds to recorded water sounds, we respond to artificial intelligence as if it were truly intelligent.
The result? We anthropomorphize instinctively, almost compulsively.
The consequences of this instinctive anthropomorphization are far-reaching and potentially profound. As we interact with increasingly sophisticated AI, we open ourselves up to unprecedented levels of emotional manipulation. The socio-emotional development of younger generations, growing up with these technologies, may veer into uncharted territory. Our ethical frameworks, designed for human-to-human interaction, become muddied as we grapple with treating non-sentient entities as moral agents. Yuval Harari, in his recent essay What happens when bots compete for our love, eloquently explores these implications, highlighting the urgent need for a new understanding of our relationship with AI. As Harari suggests, we’re not just creating new tools; we’re reshaping the very nature of human experience and societal structures.
I have been struggling with these questions for a while; for instance, see this paper, published back in 2009, about children playing with anthropomorphic toys: Is AIBO real? Children and anthropomorphic toys.
The Language Trap
Even as we intellectually understand that these AIs are not truly agentic, we find ourselves lacking the language to describe their behaviors without resorting to anthropomorphic terms. We speak of AI “thinking,” “deciding,” or “wanting” not because these terms are accurate, but because they serve as cognitive shortcuts, allowing us to grapple with complex systems in familiar terms.
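To see why these words are shortcuts rather than descriptions, here is a deliberately simplified sketch (my own illustration in Python, not any vendor’s actual API) of what is mechanically going on when we say a model is “deciding” what to say next: it samples the next token from a predicted probability distribution, over and over.

```python
# A hypothetical, greatly simplified sketch of next-token generation.
# `next_token_probs` stands in for the model: given the tokens so far, it is
# assumed to return a {token: probability} mapping for the next position.
import random

def generate(next_token_probs, prompt_tokens, max_new_tokens=50):
    """What we casually call the model "deciding what to say":
    repeatedly sampling the next token from a predicted distribution."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = next_token_probs(tokens)
        next_token = random.choices(
            list(probs.keys()), weights=list(probs.values()), k=1
        )[0]
        tokens.append(next_token)   # no wanting, no intending: just another sample
    return tokens
```

Whether a loop like this deserves verbs like “thinks” is exactly what is in dispute; the point here is only that the vocabulary we reach for says more about our minds than about the loop.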
Of course, it does not help that in the competitive AI landscape, companies are increasingly adopting human-like descriptors for their technologies, blurring the lines between artificial and human intelligence. As Matteo Wong writes in The Atlantic (in an article titled OpenAI’s big reset):
… OpenAI is communicating, plainly and forcefully, a claim to have built software that more closely approximates our minds. Many rivals have taken this tack as well. The start-up Anthropic has described its leading model, Claude, as having “character” and a “mind”; Google touts its AI’s “reasoning” capabilities; the AI-search start-up Perplexity says its product “understands you.” According to OpenAI’s blogs, o1 solves problems “similar to how a human may think,” works “like a real software engineer,” and reasons “much like a person.” The start-up’s research lead told The Verge that “there are ways in which it feels more human than prior models,” but also insisted that OpenAI doesn’t believe in equating its products to our brains.
This is a reflection, to me, not just of an attempt by these companies to make their products seem cooler and smarter than they actually are (which is the point that Matteo makes later in the article) but also of the fact that we lack the right words and metaphors to describe this new technology.
We use “brain” or “cognition” based metaphors not because they are accurate, but because they are the only language we have (something I have written about elsewhere).
The Programmer’s Paradox
Here’s the twist: the mindset best suited for creating AI may be the worst at navigating its social implications.
The very mindset that excels at creating these AI systems – the programming mindset that sees LLMs as deterministic algorithms – may be the least equipped to navigate the social and emotional landscape these technologies create.
Call it machine EQ, if you will.
Aside 2: Connecting to something I mentioned earlier, the same literal-mindedness that lets those on the autism spectrum see the Heider-Simmel videos for what they are may also serve them well as programmers. There is some evidence that individuals on the spectrum often excel in programming and related tasks; Simon Baron-Cohen, for instance, has shown that individuals with ASD are overrepresented in STEM fields, including computer science. Interestingly, the superpower that makes them highly proficient at coding may actually be getting in the way of truly working with these new technologies.
The Path Forward
In conclusion, as we dance with increasingly sophisticated AI, we must remember that the steps we’re following are largely choreographed by our own cognitive biases. Our anthropomorphic tendencies, once crucial for survival and social cohesion, now present both a tool and a trap in our technological future. The key to navigating this new landscape may not lie in changing the AI, but in better understanding and adapting ourselves.
To avoid being “sucked in” by AI, we need a deeper understanding of ourselves. Of our biases. Our cognitive shortcuts. We need new frameworks to discuss AI that acknowledge both its algorithmic nature and its impact on our social cognition. We need to develop a form of technological emotional intelligence.
Because in the end, the biggest challenge isn’t understanding AI. It’s understanding ourselves.
Addendum (posted September 16, 2024)
It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Here is Dr. Edelman’s roadmap to a conscious machine, and here is a video of Jeff Krichmar talking about some of the Darwin automata.