The Conscious Suspension of Belief: Getting Smart about Human-AI Interaction

Sunday, October 13, 2024

A classic tale of early cinema recounts the 1896 Paris screening of the Lumière brothers’ “L’arrivée d’un train en gare de La Ciotat” (Arrival of a Train at La Ciotat Station). According to popular accounts, some viewers reacted with panic to the realistic image of an approaching train, scrambling to escape what they perceived as a real locomotive bearing down upon them.

It’s a captivating story, one that’s been repeated countless times to illustrate the power of cinema. There’s just one problem: this probably never happened. Recent historical research suggests this tale of terrified moviegoers is likely an urban legend, a colorful exaggeration that has taken on a life of its own.

This historical anecdote, myth and all, highlights a central paradox of media representations: they are both real and unreal. And our job, when confronted by them, is to set aside their unreality and respond to them as if they were true. In the case of the Lumière brothers’ film, for instance, viewers knew, on a rational level, that they were watching a film projection. Yet their perceptual systems responded as if a three-dimensional object were hurtling towards them.

While the extreme reaction to the Lumière brothers’ film may be exaggerated, it illustrates a fundamental aspect of how we engage with media. This tension between knowing something is not real and yet responding as if it were has long fascinated scholars and creators alike. One of the first people to try to explain this phenomenon was the English poet Samuel Taylor Coleridge.

Back in 1817, Coleridge argued, specifically in the context of reading, that readers had to deliberately set aside their skepticism in order to immerse themselves in the world of fiction and literature. He called this process a “willing suspension of disbelief.”

This is important to understand, since this duality of representation – being both real and unreal – is inherent in all media depictions. Picasso’s bull series (below) illustrates this brilliantly, as does Magritte’s famous “The Treachery of Images” with its painted pipe captioned “Ceci n’est pas une pipe” (This is not a pipe). We see the bull in Picasso’s drawings even as its depiction becomes more and more abstract.

And we see the pipe in Magritte’s painting as a pipe, even though we know it is just a painting. The image below plays on the same idea, though in reverse: it suggests not that the pipe is real, but rather that this is NOT a painting by Magritte. This is the treachery of images (which, incidentally, was the title of Magritte’s painting of a not-pipe!).

“This is not a painting by Magritte.” Design by Punya Mishra, based on a painting by Magritte

In each case (whether Picasso’s geometric bull, or Magritte’s pipe), our minds bridge the gap between representation and reality. We see a bull, a pipe – yet we know they’re just marks on canvas (or on screen). This cognitive flexibility, perceiving the real in the unreal, is both a remarkable feature of human perception and a potential pitfall in our increasingly mediated world.

The Myth of Suspended Disbelief

Most discussions of our engagement with media take the idea of a “willing suspension of disbelief” as central: media representations (paintings, films, video games) are all unreal, and we consciously choose to “suspend our disbelief” when engaging with them.

But cognitive science suggests that Coleridge had it backwards. The fundamental idea is that belief is easy, disbelief requires cognitive effort, and we will, somewhat automatically, go with the one that requires less effort. We do not need to willingly suspend our disbelief; belief is the default, and it is disbelief that must be consciously summoned.

The Default of Belief: Insights from Cognitive Science

In his seminal work “Thinking, Fast and Slow,” Daniel Kahneman introduces us to two systems of thought. System 1 is fast, intuitive, and emotional, operating automatically and quickly, with little or no effort. System 2, on the other hand, is slower, more deliberative, and logical, allocating attention to effortful mental activities that demand it, including complex computations.

Our tendency to believe is a System 1 process. It’s quick, automatic, and requires little mental effort. Skepticism, on the other hand, is a System 2 process. It’s slow, deliberate, and cognitively demanding.

Consider our reaction to a well-crafted film. System 1 allows us to become emotionally invested in the characters and story, laughing or crying along with the events on screen. This happens automatically and effortlessly. A good example is the Heider and Simmel abstract animation I mentioned in a previous post. System 2, however, is what allows us to step back and critically analyze the film’s themes, cinematography, or historical accuracy. This requires conscious effort and doesn’t happen unless we deliberately engage in it.

But what happens when these cognitive shortcuts, our System 1 processes, encounter stimuli they weren’t evolved to handle? In my previous post (Beavers, Brains, & Chatbots: Cognitive Illusions in the age of AI) I explained that beavers, nature’s adorable engineers, have been programmed (by blind evolution) to build dams at the SOUND of running water. This made perfect sense in a world without tape recorders. But not so much once scientists started creeping around in the undergrowth, placing recorders to track the behavior of these animals: the beavers were fooled.

And now we have generative AI.

Artificial Intelligence presents an even more complex challenge to our cognitive processes. Unlike traditional media or simple recordings, AI can actively respond and adapt to our inputs, creating an illusion of intelligence and understanding that’s far more convincing than any static medium. This dynamic interaction taps into our social instincts and cognitive biases in unprecedented ways, making it even harder for our System 1 thinking to distinguish between genuine intelligence and sophisticated simulation.

The AI Challenge: When Our Cognitive Shortcuts Fail Us

This interplay between representation and reality becomes exponentially more complex when we consider representations of the mind itself. Here, form and content become inextricably intertwined. Our engagement with AI exemplifies this conundrum – the only metaphors we have for artificial intelligence are mental ones, drawn from our understanding of human cognition. In a previous blog post, I explored this very issue, highlighting how our tendency to anthropomorphize AI stems (partly) from our limited conceptual toolkit. We describe AI systems in terms of “thinking,” “learning,” or “deciding,” not because these terms are necessarily accurate, but because that’s the only language we have at hand. This linguistic and conceptual limitation blurs the lines between artificial processes and human cognition, making our interactions with AI particularly messy and prone to misinterpretation.

In a recent blog post, I explored how this tendency to anthropomorphize extends to our interactions with AI: just as we see faces in clouds or attribute personalities to our cars, we can’t help but see human-like qualities in AI systems that can engage in natural language conversations, remember context, and simulate creativity.

If it walks like a duck and talks like a duck… it most probably is a duck. At least, that’s how our Stone Age brains function. It is almost as if we don’t have a choice: we will anthropomorphize instinctively, almost compulsively. System 1 kicking in. Just as the beaver responds to recorded water sounds, we respond to artificial intelligence as if it were truly intelligent. Our logical brain knows it’s not sentient, yet at another level we can’t help but interact with it as if it were.

The Fundamental Tension

Yet, we must acknowledge a fundamental tension in this endeavor. Just as films can still move us – make us laugh, cry, and inspire us to action – despite our knowledge that they’re merely flickering images on a screen, AI has the power to captivate us in similar ways. We may intellectually understand that we’re interacting with a complex algorithm or generative computer program, but emotionally, we can still find ourselves drawn in. We connect, we share, we react as if this software had feelings, beliefs, and desires.

This phenomenon is strikingly evident in recent studies on human-AI interactions. As I’ve discussed in a previous blog post (Turing’s Tricksters), people are revealing far more personal information to AI chatbots than one might expect. Users often disclose intimate details, seek emotional support, or share vulnerabilities with these AI systems, despite knowing they’re conversing with software.

This is the paradox we face: AI will continue to capture us in its web, even as we strive to maintain our critical faculties. But it’s crucial to remember that this web is, in large part, of our own creation. It’s a reflection of our innate tendencies, our cognitive biases, and our deeply human need to connect and find meaning. It is important to hold these two realities in balance.

However, I worry that despite our best efforts, we may not be able to fully resist this pull. Our tendency to anthropomorphize and our default state of belief are deeply rooted in our evolutionary history. They cannot be changed at the flick of a switch. Just as we can’t help but see faces in clouds or flinch at sudden movements on a movie screen, we may find ourselves inexorably drawn into emotional engagement with AI, even as our rational minds protest.
