The Conscious Suspension of Belief: Getting Smart about Human-AI Interaction

Sunday, October 13, 2024

A classic tale of early cinema recounts the 1896 Paris screening of the Lumière brothers’ “L’arrivée d’un train en gare de La Ciotat” (Arrival of a Train at La Ciotat Station). According to popular accounts, some viewers reacted with panic to the realistic image of an approaching train, scrambling to escape what they perceived as a real locomotive bearing down upon them.

It’s a captivating story, one that’s been repeated countless times to illustrate the power of cinema. There’s just one problem: this probably never happened. Recent historical research suggests that the tale of terrified moviegoers is likely an urban legend, a colorful exaggeration that has taken on a life of its own.

This historical anecdote, myth and all, highlights a specific paradox about media representations: they are both real and unreal. And our job when confronted by them is to set aside their unreality and respond to them as if they were real. In the case of the Lumière brothers’ film, for instance, viewers knew, on a rational level, that they were watching a film projection. Yet their perceptual systems responded as if a three-dimensional object were hurtling towards them.

While the extreme reaction to the Lumière brothers’ film may be exaggerated, it illustrates a fundamental aspect of how we engage with media. This tension between knowing something is not real and yet responding as if it were has long fascinated scholars and creators alike. One of the first people to try to explain the phenomenon was the English poet Samuel Taylor Coleridge.

Back in 1817, Coleridge argued, specifically in the context of reading, that readers had to deliberately set aside their skepticism in order to immerse themselves in the world of fiction and literature. He called this process a ‘willing suspension of disbelief.’

This is important to understand, since this duality of representation, being both real and unreal, is inherent in all media depictions. Picasso’s bull series (below) illustrates this brilliantly, as does Magritte’s famous “The Treachery of Images,” with its painted pipe captioned “Ceci n’est pas une pipe” (This is not a pipe). We see the bull in Picasso’s drawings even as the depiction becomes more and more abstract.

And we see the pipe in Magritte’s painting as a pipe even though we know it is just a painting. The image below plays with this idea: it suggests not that the pipe is real but rather that this is NOT a painting by Magritte. Such is the treachery of images (which, incidentally, was the title of Magritte’s painting of the not-pipe!).

“This is not a painting by Magritte.” Design by Punya Mishra, based on a painting by Magritte

In each case (whether Picasso’s geometric bull or Magritte’s pipe), our minds bridge the gap between representation and reality. We see a bull, a pipe – yet we know they’re just marks on canvas (or on screen). This cognitive flexibility, perceiving the real in the unreal, is both a remarkable feature of human perception and a potential pitfall in our increasingly mediated world.

The Myth of Suspended Disbelief

Most discussions of how we respond to media treat the idea of a “willing suspension of disbelief” as central: media representations (paintings, films, video games) are all unreal, and we consciously choose to “suspend our disbelief” when engaging with them.

The fundamental idea here, and the reason “suspension of disbelief” may be something of a myth, is that belief is easy, disbelief requires cognitive effort, and we will, somewhat automatically, go with the one that requires less effort.

The Default of Belief: Insights from Cognitive Science

In his seminal work “Thinking, Fast and Slow,” Daniel Kahneman introduces us to two systems of thought. System 1 is fast, intuitive, and emotional, operating automatically and quickly, with little or no effort. System 2, on the other hand, is slower, more deliberative, and logical, allocating attention to effortful mental activities that demand it, including complex computations.

Our tendency to believe is a System 1 process. It’s quick, automatic, and requires little mental effort. Skepticism, on the other hand, is a System 2 process. It’s slow, deliberate, and cognitively demanding.

Consider our reaction to a well-crafted film. System 1 allows us to become emotionally invested in the characters and story, laughing or crying along with the events on screen. This happens automatically and effortlessly. A good example is the Heider and Simmel abstract animations I mentioned in a previous post. System 2, however, is what allows us to step back and critically analyze the film’s themes, cinematography, or historical accuracy. This requires conscious effort and doesn’t happen unless we deliberately engage in it.

But what happens when these cognitive shortcuts—our System 1 processes—encounter stimuli they weren’t evolved to handle? In my previous post (Beavers, Brains, & Chatbots: Cognitive Illusions in the Age of AI) I explained that beavers, nature’s adorable engineers, have been programmed (by blind evolution) to build dams at the SOUND of running water. This made perfect sense in a world without tape recorders. But not so much once scientists started creeping around in the undergrowth playing recordings of running water to these animals: the beavers were fooled.

And now we have generative AI.

Artificial Intelligence presents an even more complex challenge to our cognitive processes. Unlike traditional media or simple recordings, AI can actively respond and adapt to our inputs, creating an illusion of intelligence and understanding that’s far more convincing than any static medium. This dynamic interaction taps into our social instincts and cognitive biases in unprecedented ways, making it even harder for our System 1 thinking to distinguish between genuine intelligence and sophisticated simulation.

The AI Challenge: When Our Cognitive Shortcuts Fail Us

This interplay between representation and reality becomes exponentially more complex when we consider representations of the mind itself. Here, form and content become inextricably intertwined. Our engagement with AI exemplifies this conundrum – the only metaphors we have for artificial intelligence are mental ones, drawn from our understanding of human cognition. In a previous blog post, I explored this very issue, highlighting how our tendency to anthropomorphize AI stems (partly) from our limited conceptual toolkit. We describe AI systems in terms of “thinking,” “learning,” or “deciding,” not because these terms are necessarily accurate, but because that’s the only language we have at hand. This linguistic and conceptual limitation blurs the lines between artificial processes and human cognition, making our interactions with AI particularly messy and prone to misinterpretation.

In a recent blog post, I explored how our tendency to anthropomorphize extends to our interactions with AI. Just as we see faces in clouds or attribute personalities to our cars, we can’t help but see human-like qualities in AI systems that can engage in natural language conversations, remember context, and simulate creativity.

If it walks like a duck and talks like a duck… it most probably is a duck. At least that’s how our stone-age brains function. It is almost as if we don’t have a choice: we anthropomorphize instinctively, almost compulsively. System 1 kicking in. Just as the beaver responds to recorded water sounds, we respond to artificial intelligence as if it were truly intelligent. Our logical brain knows it’s not sentient, yet at another level, we can’t help but interact with it as if it were.

The Fundamental Tension

Yet, we must acknowledge a fundamental tension in this endeavor. Just as films can still move us – make us laugh, cry, and inspire us to action – despite our knowledge that they’re merely flickering images on a screen, AI has the power to captivate us in similar ways. We may intellectually understand that we’re interacting with a complex algorithm or generative computer program, but emotionally, we can still find ourselves drawn in. We connect, we share, we react as if this software had feelings, beliefs, and desires.

This phenomenon is strikingly evident in recent studies on human-AI interactions. As I’ve discussed in a previous blog post (Turing’s Tricksters), people are revealing far more personal information to AI chatbots than one might expect. Users often disclose intimate details, seek emotional support, or share vulnerabilities with these AI systems, despite knowing they’re conversing with software.

This is the paradox we face: AI will continue to capture us in its web, even as we strive to maintain our critical faculties. But it’s crucial to remember that this web is, in large part, of our own creation. It’s a reflection of our innate tendencies, our cognitive biases, and our deeply human need to connect and find meaning. It is important to hold these two realities in balance.

However, I worry that despite our best efforts, we may not be able to fully resist this pull. Our tendency to anthropomorphize and our default state of belief are deeply rooted in our evolutionary history. They cannot be changed at the flick of a switch. Just as we can’t help but see faces in clouds or flinch at sudden movements on a movie screen, we may find ourselves inexorably drawn into emotional engagement with AI, even as our rational minds protest.

