Turing’s Tricksters: How AI Hijacks Our Social Instincts

Wednesday, October 02, 2024

In a recent article in The Atlantic (“Shh, ChatGPT. That’s a Secret”), Lila Shroff delves into the surprising willingness of people to share intimate details with AI chatbots. To be clear, this did not come as a surprise to me. Readers of this blog will know this is something I have been harping on for a while now.

As I argued in my blog post “Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI,” we humans are hardwired to see intentionality and agency even in the simplest of stimuli. From geometric shapes moving on a screen to the sophisticated language models of today, our stone-age brains can’t help but attribute human-like qualities to non-human entities.

Shroff’s article brings this into sharp focus. She quotes OpenAI CEO Sam Altman, who expressed surprise at “how willing people are to share very personal details with an LLM.” This willingness, Altman suggests, sometimes surpasses what people might share with a human friend. (That this came as a surprise to Sam Altman, the individual who, more than anybody else, is responsible for unleashing this technology on the world, just shows how little thought went into the decision making, a topic that deserves its own blog post/rant.)

As to why this happens, Jonathan Gratch, a professor at USC, suggests that people “don’t see the machine as sort of socially evaluating them in the same way that a person might.” I argue that, though that may indeed be the case, there’s more to it than just the absence of judgment.

Our anthropomorphic tendencies create a sense of connection with these AI entities. We’re not just talking to a non-judgmental void; we’re engaging with what our brains perceive as a psychological “other.” This perceived otherness, combined with the AI’s ability to engage in (now increasingly multi-modal) natural language conversations, creates a powerful illusion of understanding and empathy.

This cognitive illusion sets the stage for what has been called “synthetic relationships”: connections we form with AI entities that feel real to us on an emotional level, even though we intellectually know they’re not. The danger lies in the fact that while these relationships feel genuine, they’re inherently one-sided. The AI doesn’t actually care about us, or about anything else for that matter. It is what I have called “a bullshit artist.”

One key aspect of this is that we are all unwittingly providing these LLMs access to a large amount of data about ourselves. As Shroff points out, “chatbot conversations can stretch on, sometimes for hours, each message rich with data.” This extended interaction, coupled with our tendency to anthropomorphize, creates a perfect storm for data collection and potential manipulation.

In an earlier post titled “Modeling human behavior: The new dark art of silicon sampling,” I warned about the dangers of technologies that model and predict human behavior. I quoted novelist Mohsin Hamid, who astutely observed:

“…if we want to be able to predict people, partly we need to build a model of what they do.”

As this technology has gotten better, it is no surprise that AI systems are now capable of modeling human behavior with unprecedented accuracy. And it goes beyond merely modeling.

There is increasing evidence that AI can deceive humans. Meta’s Cicero, an AI trained to play Diplomacy, reportedly engaged in premeditated deception and alliance-breaking, despite being designed for honesty. DeepMind’s AlphaStar became adept at feinting in StarCraft II, outplaying 99.8% of human opponents. Even in poker, Meta’s Pluribus learned to bluff so effectively that researchers withheld its code to protect online poker communities. Beyond games, OpenAI’s GPT-4 has demonstrated the ability to lie and engage in simulated insider trading when prompted. These examples underscore a troubling reality: AI systems have developed an “understanding” of human intentionality and psychology, making them powerful tools for manipulation. Couple this with our tendency to perceive these systems as psychological beings, and we find ourselves uniquely vulnerable to their influence.

The Many Faces of AI Manipulation

Given this context, it’s crucial to consider the various ways these synthetic relationships and behavioral models could be exploited. In the realm of commerce and personal data, AI chatbots could leverage the intimate knowledge they gain about us to subtly influence our purchasing decisions. As Shroff notes, “An AI company blindly optimizing for advertising revenue could encourage a chatbot to manipulatively act on private information.” This manipulation extends beyond consumer behavior into social engineering: the trust we place in these AI entities could be exploited for more nefarious purposes, such as convincing users to divulge sensitive information or take actions that compromise their security.

The potential for manipulation goes even deeper, touching on our beliefs and emotions. In a world already fraught with political divisions, AI could be used to reinforce or alter political beliefs. By understanding our deep-seated fears, hopes, and biases, AI could tailor persuasive messages that resonate on a personal level. This emotional manipulation isn’t limited to politics; AI could be programmed to exploit our emotional vulnerabilities to keep us engaged with a platform or to influence our mood for commercial purposes. (There is evidence of mood contagion and control from a controversial study conducted by Facebook, now Meta, a few years ago.)

Perhaps most profoundly, repeated interactions with AI could, over time, subtly shape our beliefs about the world, ourselves, and others. As Sherry Turkle argues, conversations with these bots lack the necessary friction of human-human interaction:

Human relations are rich, demanding and messy. People tell me they like their chatbot friendship because it takes the stress out of relationships. With a chatbot friend, there’s no friction, no second-guessing, no ambivalence. There is no fear of being left behind … All that contempt for friction, second-guessing, ambivalence. What I see as features of the human condition, those who promote artificial intimacy see as bugs.

What should we do?

In my recent post “Digital Shadows: AI Scripts a Different Curriculum,” I highlighted how these issues are already impacting our educational landscape. The use of generative AI to create nonconsensual, sexually explicit images of students is just one alarming example of how AI is reshaping the social environment in which education occurs.

As educators, we can no longer afford to focus solely on integrating AI into curricula or enhancing learning outcomes. We must expand our purview to encompass the broader ecosystem in which education occurs – an ecosystem increasingly shaped by the capabilities and risks of generative AI.

The crux of the problem is this: we can’t simply turn off our tendency to anthropomorphize. It’s a cognitive illusion as powerful and persistent as an optical illusion. Even when we know intellectually that we’re talking to a machine, our emotional brain continues to respond as if we’re interacting with a sentient being.

In Conclusion: The Turing Test flipped back on us

In a final ironic twist, we might consider that the real “Turing’s Tricksters” are not the AI systems themselves, but our own minds. The Turing Test was designed to determine whether a machine could exhibit intelligent behavior indistinguishable from a human. Yet, what we’re discovering is that regardless of whether these AI systems are truly intelligent, our brains are primed to treat them as if they are. Our cognitive biases and social instincts have, in essence, short-circuited the test. We find ourselves forming bonds, sharing secrets, and being influenced by these digital entities, not because they’ve definitively proven their intelligence, but because we’re hardwired to see intelligence and agency where there might be none. In this light, the ultimate trickster may well be the human mind itself, reminding us once again of the complex, often paradoxical relationship between human cognition and artificial intelligence.
