A classic tale of early cinema recounts the 1896 Paris screening of the Lumière brothers’ “L’arrivée d’un train en gare de La Ciotat” (Arrival of a Train at La Ciotat Station). According to popular accounts, some viewers reacted with panic to the realistic image of an approaching train, scrambling to escape what they perceived as a real locomotive bearing down upon them. The original video is embedded below.
It’s a captivating story, one that’s been repeated countless times to illustrate the power of cinema. There’s just one problem: this probably never happened. Recent historical research suggests that this tale of terrified moviegoers is likely an urban legend, a colorful exaggeration that has taken on a life of its own.
This historical anecdote, myth and all, highlights a specific paradox about media representations—they are both real and unreal. And our task when confronted by them is to set aside their unreality and engage with them as if they were true. In the case of the Lumière brothers’ film, for instance, viewers knew, on a rational level, that they were watching a film projection. Yet their perceptual systems responded as if a three-dimensional object were hurtling towards them.
While the extreme reaction to the Lumière brothers’ film may be exaggerated, it illustrates a fundamental aspect of how we engage with media. This tension between knowing something is not real and yet responding as if it were has long fascinated scholars and creators alike. One of the first people to try to explain this phenomenon was the English poet Samuel Taylor Coleridge.
Back in 1817, writing specifically in the context of reading, Coleridge argued that readers had to deliberately set aside their skepticism in order to immerse themselves in the world of fiction and literature. He called this process a ‘willing suspension of disbelief.’
He argued that fully experiencing media requires a CONSCIOUS CHOICE to temporarily overlook its inherent artifice, withholding judgment and accepting the premises established by the author or creator. This willingness to engage allows audiences to connect emotionally with stories, even when they contain fantastical or improbable elements. This concept of the willing suspension of disbelief has since shaped our understanding of audience engagement and storytelling techniques across all media forms.
This is important to understand because this duality of representation – being both real and unreal – is inherent in all media depictions. Picasso’s bull series (below) illustrates this brilliantly, as does Magritte’s famous “The Treachery of Images,” with its painted pipe captioned “Ceci n’est pas une pipe” (This is not a pipe). We see the bull in Picasso’s drawings, even as its depiction becomes more and more abstract.
And we see the pipe in Magritte’s painting as a pipe even though we know it is just a painting. The image below plays on the same idea, except that here the caption is literally true: this is NOT a painting by Magritte. This is the treachery of images (which, incidentally, was the title of Magritte’s painting of a not-pipe!).
“This is not a painting by Magritte.” Design by Punya Mishra, based on a painting by Magritte.
In each case (whether Picasso’s geometric bull or Magritte’s pipe), our minds bridge the gap between representation and reality. We see a bull, a pipe – yet we know they’re just marks on canvas (or on screen). This cognitive flexibility, perceiving the real in the unreal, is both a remarkable feature of human perception and a potential pitfall in our increasingly mediated world.
The Myth of Suspended Disbelief
Most discussions of our response to media take the idea of “willing suspension of disbelief” as central: media representations (paintings, films, video games) are all unreal, and we consciously choose to “suspend our disbelief” when engaging with them.
But what if we’ve got it backwards?
What if our default state isn’t disbelief, but belief? Being critical and questioning isn’t our natural mode – it’s hard cognitive labor.
The fundamental idea here is that belief is easy, disbelief requires cognitive effort, and we will somewhat automatically go with the one that requires less effort.
The Default of Belief: Insights from Cognitive Science
In his seminal work “Thinking, Fast and Slow,” Daniel Kahneman introduces us to two systems of thought. System 1 is fast, intuitive, and emotional, operating automatically and quickly, with little or no effort. System 2, on the other hand, is slower, more deliberative, and logical, allocating attention to effortful mental activities that demand it, including complex computations.
Our tendency to believe is a System 1 process. It’s quick, automatic, and requires little mental effort. Skepticism, on the other hand, is a System 2 process. It’s slow, deliberate, and cognitively demanding.
Consider our reaction to a well-crafted film. System 1 allows us to become emotionally invested in the characters and story, laughing or crying along with the events on screen. This happens automatically and effortlessly. A good example is the Heider and Simmel abstract animations I mentioned in a previous post. System 2, however, is what allows us to step back and critically analyze the film’s themes, cinematography, or historical accuracy. This requires conscious effort and doesn’t happen unless we deliberately engage in it.
Kahneman’s work shows us that our brains are constantly looking for shortcuts, ways to conserve energy while making sense of the world. Believing what we see, hear, or read is cognitively easier than questioning it. This default to belief isn’t a bug, but a feature – one that’s served us well throughout our evolutionary history.
But what happens when these cognitive shortcuts—our System 1 processes—encounter stimuli they weren’t evolved to handle? In my previous post (Beavers, Brains, & Chatbots: Cognitive Illusions in the age of AI) I explained that beavers, nature’s adorable engineers, have been programmed (by blind evolution) to build dams at the SOUND of running water. This made perfect sense in a world without tape recorders. But not so much when scientists started creeping around in the undergrowth, placing recorders to track the behavior of these animals: the beavers were fooled.
And now we have generative AI.
Artificial Intelligence presents an even more complex challenge to our cognitive processes. Unlike traditional media or simple recordings, AI can actively respond and adapt to our inputs, creating an illusion of intelligence and understanding that’s far more convincing than any static medium. This dynamic interaction taps into our social instincts and cognitive biases in unprecedented ways, making it even harder for our System 1 thinking to distinguish between genuine intelligence and sophisticated simulation.
The AI Challenge: When Our Cognitive Shortcuts Fail Us
This interplay between representation and reality becomes exponentially more complex when we consider representations of the mind itself. Here, form and content become inextricably intertwined. Our engagement with AI exemplifies this conundrum – the only metaphors we have for artificial intelligence are mental ones, drawn from our understanding of human cognition. In a previous blog post, I explored this very issue, highlighting how our tendency to anthropomorphize AI stems (partly) from our limited conceptual toolkit. We describe AI systems in terms of “thinking,” “learning,” or “deciding,” not because these terms are necessarily accurate, but because that’s the only language we have at hand. This linguistic and conceptual limitation blurs the lines between artificial processes and human cognition, making our interactions with AI particularly messy and prone to misinterpretation.
In a recent blog post, I explored how our tendency to anthropomorphize extends to our interactions with AI. Just as we see faces in clouds or attribute personalities to our cars, we can’t help but see human-like qualities in AI systems that can engage in natural language conversations, remember context, and simulate creativity.
If it walks like a duck and talks like a duck… it most probably is a duck. At least that’s how our stone age brains function. It is almost as if we don’t have a choice. We will anthropomorphize instinctively, almost compulsively. System 1 kicking in. Just as the beaver responds to recorded water sounds, we respond to artificial intelligence as if it were truly intelligent. Our logical brain knows it’s not sentient, yet at another level, we can’t help but interact with it as if it were.
The Fundamental Tension
Yet, we must acknowledge a fundamental tension in this endeavor. Just as films can still move us – make us laugh, cry, and inspire us to action – despite our knowledge that they’re merely flickering images on a screen, AI has the power to captivate us in similar ways. We may intellectually understand that we’re interacting with a complex algorithm or generative computer program, but emotionally, we can still find ourselves drawn in. We connect, we share, we react as if this software had feelings, beliefs, and desires.
This phenomenon is strikingly evident in recent studies on human-AI interactions. As I’ve discussed in a previous blog post (Turing’s Tricksters), people are revealing far more personal information to AI chatbots than one might expect. Users often disclose intimate details, seek emotional support, or share vulnerabilities with these AI systems, despite knowing they’re conversing with software.
This is the paradox we face: AI will continue to capture us in its web, even as we strive to maintain our critical faculties. But it’s crucial to remember that this web is, in large part, of our own creation. It’s a reflection of our innate tendencies, our cognitive biases, and our deeply human need to connect and find meaning. It is important to hold these two realities in balance.
Thus, the challenge isn’t to suspend disbelief, but rather to take on the far more difficult task: suspending belief. We need to train ourselves to engage System 2 thinking when interacting with AI, to consciously remind ourselves that the apparent intelligence, creativity, and emotional responses are simulations, not genuine experiences. We must learn to dance on the edge of belief and skepticism while always maintaining that crucial thread of conscious, critical engagement.
However, I worry that despite our best efforts, we may not be able to fully resist this pull. Our tendency to anthropomorphize and our default state of belief are deeply rooted in our evolutionary history. They cannot be changed at the flick of a switch. Just as we can’t help but see faces in clouds or flinch at sudden movements on a movie screen, we may find ourselves inexorably drawn into emotional engagement with AI, even as our rational minds protest.