The ELIZA Effect-ion

Sunday, December 01, 2024

NOTE: This is a cross post with the Civics of Technology blog.

I first read about the “ELIZA Effect” as a high-school student in India, in Douglas Hofstadter’s classic rumination on art, music, humanity and AI—Gödel, Escher, Bach: An Eternal Golden Braid. The eponymous effect came from ELIZA, an early chatbot created by Joseph Weizenbaum, programmed to mimic a Rogerian psychotherapist. It was a simple program, usually just parroting your comments back at you in the form of a question. Yet Weizenbaum found, to his initial surprise and then distress, that people often responded to the program as if it were human, at times even forming emotional attachments, though they knew it was just an unsophisticated parrot following simple rules. This concerned him so much that he shifted from technical development to warning others about the dangers of attributing human-like capabilities to machines.
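To make concrete just how little machinery the effect requires, here is a toy sketch of ELIZA-style reflection (not Weizenbaum’s actual script, and the pronoun table is illustrative): swap first-person pronouns for second-person ones and echo the statement back as a question.

```python
# A minimal sketch of ELIZA-style "parroting": swap pronouns,
# then return the user's own statement as a question.
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(statement: str) -> str:
    words = statement.lower().rstrip(".!?").split()
    swapped = [PRONOUN_SWAPS.get(w, w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my work."))
```

A handful of rules like this is the entire trick; everything else happens in the mind of the person typing.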

What did it say about us, I wondered, that we were so quick to anthropomorphize a computer program? Decades later, as an Assistant Professor at Michigan State University, I revisited these ideas through a series of experimental studies of people’s psychological responses to interactive media. In one study, I looked at how people responded to praise and blame from a computer tutor (they preferred praise in every case, in case you are curious); in another, at how children play with and engage with robotic toys. Then life shifted, as it does, and I moved on to other things, though an interest in these issues remained.

It is not surprising, therefore, that I was “primed” in some manner to think of these matters when Large Language Model-based chatbots erupted onto the scene. These bots, with their ability to hold a conversation in natural language, using words that indicate intent, agency and even affect, could well produce the ELIZA Effect on steroids.

Over the past few years, I have written quite a bit about these ideas, mostly as blog posts on my personal website, focusing on what happens to us (individually and collectively) as agentic versions of AI become more sophisticated and pervasive. These writings approach this idea from a variety of perspectives, seeking to understand the psychological, social, and ethical implications of our interactions with AI. I wonder about how these technologies are reshaping our understanding of intelligence, consciousness, and human connection.

In this blog post for Civics of Tech, I hope to share how I have been approaching these ideas by providing some connective tissue between my blog posts, which you can dig into if you are interested in going deeper.

One of the first pieces I wrote was back in 2022, before ChatGPT even hit our collective consciousness. This post was prompted by the news of Blake Lemoine being fired from Google for claiming that LaMDA (a large language model) was sentient. I took the question “Can a Computer Program Be Sentient?” and argued that it was not so much about whether the program had achieved sentience as about our readiness to believe that it had. I grounded my thoughts in my previous research into this topic, managing to make an interesting connection to Rodolphe Topffer, the father of the modern comic book. Intriguing? Well, you will have to read the full post to learn more.

As I argued, such attributions are not new. We have always believed unreal things—from paintings to books, from movies to video games. Humans have the amazing ability to believe and ascribe meaning to the most random of phenomena. This perspective runs contrary to how most media theory approaches this idea, where the standard trope is that of “willing suspension of disbelief”: the notion that we consciously choose to “suspend our disbelief” in unreal things—namely media representations such as stories, paintings, films, and video games. As I wrote in the post (Willing suspension of belief: The paradox of human AI interaction):

But what if we’ve got it backwards?

What if our default state isn’t disbelief, but belief? Being critical and questioning isn’t our natural mode – it’s hard cognitive labor.

I argued, based on Kahneman’s idea of thinking fast and slow, that our brain’s path of least resistance is to believe. This is what makes all forms of art possible, from sketches to oil paintings, from animated films to true crime podcasts. And this is also part of the reason why we will, whether we like it or not, fall for these agentic AI systems. It is too much cognitive labor not to; in fact, I argue (in Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI) that we may be evolutionarily primed to.

This cognitive dissonance, where we engage with AI as if it were a “psychological other,” has some interesting consequences that extend far beyond mere curiosity. This includes finding ourselves emotionally invested in interactions with chatbots and digital assistants, despite knowing they lack true consciousness. As AI systems become increasingly sophisticated in mimicking human behavior and thought patterns, they begin to exploit our social instincts and cognitive biases in unprecedented ways.

This in some ways makes these technologies flip the Turing Test—making them what I have called “Turing’s Tricksters,” hijacking our innate tendencies to connect and find meaning. This potential manipulation can lead us to overshare personal information or seek emotional support from non-sentient entities. This vulnerability is further compounded by AI’s ability to learn and adapt, creating a feedback loop where it becomes increasingly adept at telling us what we want to hear, forming a kind of “honey trap” as I discuss in my post “AI’s Honey Trap: Why AI Tells Us What We Want to Hear.”

And finally, I argue that this is not happening just by chance and due to our innate predilections. It is being actively pushed by AI corporations, because they see this as a powerful way to engage, control and manipulate us. I dig into this in a couple of posts where I unpack how these companies are deliberately designing these systems to feel more like companions than mere word-predicting machines (“They’re Not Allowed to Use That S**t”: AI’s Rewiring of Human Connection).

This of course brings a whole host of ethical issues to the forefront—which led to a mini-rant about the absurd one-sidedness of the ethics-in-AI debate. As AI systems learn to “read” human emotions and behaviors, questions arise about privacy, manipulation, and the potential for AI to be used as a tool for social engineering. In “Mind Games: When AI Learns to Read Us,” I examine how AI might be used to build artificial “characters” that play on our emotions and exploit our social needs.

Finally, I wonder what the widespread adoption of these agentic AI technologies means for our personal social lives. It is conceivable that the convenience and personalization offered by AI assistants could lead to a decline in open, public online and in-person interactions, as users retreat into private, AI-mediated conversations. In “Chatting Alone: AI and the (Potential) Decline of Open Digital Spaces,” I raise concerns about the potential for increased isolation and the erosion of shared interpersonal experiences.

While these theoretical and conceptual explorations are fun, there are also my personal experiments with these chatbots, which provide another perspective on these issues.

In one post (Kern You Believe It? A Typographical Tango with AI) I describe (actually, let Claude.AI describe) a series of experiments in creative typography we engaged in together, leading to some interesting meta-conversations about what this engagement means. In another experiment (Finding In/Sight: A Recursive Dance with AI) we got into the use of intentional language by AI and explored how the very use of language implies some form of intentionality.

What was interesting, upon reflection, was that despite my knowing, every step of the way, that I was interacting with a bullshit artist / stochastic parrot (take your pick) I was actually having a lot of fun. In short, the interaction was joyful, though clearly one-sided. As I wrote:

And even though there was no deeper truth there, I have to acknowledge that I got real pleasure from this interaction. My feelings were genuine. Claude’s consciousness was not real, its words a simulacrum of human interaction. There wasn’t a there there. But truth be told, my emotions were real. The joy I felt, through this interaction, was genuine.

There’s an intriguing paradox in how we interact with AI. Consider how movies affect us—we know they’re just light and shadow playing across a screen, yet they still make us laugh, weep, and feel inspired to change our lives. Similarly with AI, even when we’re fully aware we’re engaging with sophisticated software, we can find joy and pleasure in these interactions. I never lost sight of the fact that I was communicating with a stochastic parrot, but I cannot deny that it was fun.

Looking across these essays, I find myself circling back to the ELIZA Effect and the questions it first raised for me. The most important question for me, more than whether these tools are intelligent or sentient, is the question: what do our interactions with them reveal about us—our desires, our fears, and our need for connection?

This has led me to think about what this means for media or AI literacy, something quite the rage these days. I mean, not a day goes by without some agency or organization offering its own framework! I believe many of these well-intentioned approaches miss the point. For the most part, they focus on analyzing how media constructs and conveys messages—an approach that falls short when dealing with AI systems specifically engineered to exploit human psychology.

What we need is an integrated understanding that examines both AI technology and human psychology in tandem. This means going beyond simply learning about AI’s capabilities and constraints. We must recognize our own cognitive tendencies – why we instinctively attribute human qualities to AI, develop trust in automated systems, and form emotional connections with artificial entities despite knowing their true nature. As I wrote in Building character: When AI plays us:

True media literacy in the age of AI isn’t just about understanding the nature of these new technologies – it’s about understanding ourselves.


