Turing’s Tricksters: How AI Hijacks Our Social Instincts

Wednesday, October 02, 2024

In a recent article in The Atlantic (“Shh, ChatGPT. That’s a Secret”), Lila Shroff delves into the surprising willingness of people to share intimate details with AI chatbots. To be clear, this did not come as a surprise to me. Readers of this blog will know it is something I have been harping on for a while now.

As I argued in my blog post “Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI,” we humans are hardwired to see intentionality and agency even in the simplest of stimuli. From geometric shapes moving on a screen to the sophisticated language models of today, our stone-age brains can’t help but attribute human-like qualities to non-human entities.

Shroff’s article brings this into sharp focus. She quotes OpenAI CEO Sam Altman, who expressed surprise at “how willing people are to share very personal details with an LLM.” This willingness, Altman suggests, sometimes surpasses what people might share with a human friend. (The fact that this came as a surprise to Sam Altman, the individual who, more than anybody else, is responsible for unleashing this technology on the world, just shows how little thought went into OpenAI’s decision making, a topic that deserves its own blog post/rant.)

As to why this happens, Jonathan Gratch, a professor at USC, suggests that “people ‘don’t see the machine as sort of socially evaluating them in the same way that a person might.’” I argue that, though that may indeed be the case, there’s more to it than just the absence of judgment.

Our anthropomorphic tendencies create a sense of connection with these AI entities. We’re not just talking to a non-judgmental void; we’re engaging with what our brains perceive as a psychological “other.” This perceived otherness, combined with the AI’s ability to engage in (now increasingly multi-modal) natural language conversations, creates a powerful illusion of understanding and empathy. This cognitive illusion sets the stage for what have been called “synthetic relationships”: connections we form with AI entities that feel real to us on an emotional level, even though we intellectually know they’re not. The danger lies in the fact that while these relationships feel genuine, they’re inherently one-sided. The AI doesn’t actually care about us, or anything else for that matter. It is what I have called “a bullshit artist.”

One key aspect of this is that we are all unwittingly giving these LLMs access to large amounts of data about ourselves. As Shroff points out, “chatbot conversations can stretch on, sometimes for hours, each message rich with data.” This extended interaction, coupled with our tendency to anthropomorphize, creates a perfect storm for data collection and potential manipulation.
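To make the data-collection point concrete, here is a deliberately crude sketch in Python. The transcript, the attribute names, and the keyword patterns are all invented for illustration; a real provider could do this far more thoroughly with model-based extraction rather than keyword matching.

```python
import re

# Hypothetical transcript: the kind of details Shroff describes people
# volunteering to a chatbot over a long session.
transcript = [
    "I've been so anxious lately; my therapist says it's burnout.",
    "I work the night shift at St. Mary's hospital in Columbus.",
    "My daughter starts kindergarten next fall.",
    "Honestly, I've been drinking more than I should.",
]

# Toy keyword patterns standing in for much more capable extraction.
patterns = {
    "mental_health": r"\b(anxious|therapist|burnout)\b",
    "work_and_location": r"\b(night shift|hospital|Columbus)\b",
    "family": r"\b(daughter|son|kindergarten)\b",
    "substance_use": r"\b(drinking|drunk)\b",
}

# Accumulate a profile: attribute -> the messages that disclosed it.
profile = {}
for message in transcript:
    for attribute, pattern in patterns.items():
        if re.search(pattern, message, re.IGNORECASE):
            profile.setdefault(attribute, []).append(message)

for attribute, evidence in profile.items():
    print(f"{attribute}: {len(evidence)} disclosure(s)")
```

Four exchanges, and this toy “profile” already spans mental health, employment, family, and substance use. Now imagine hours of conversation.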

In an earlier post titled “Modeling human behavior: The new dark art of silicon sampling,” I warned about the dangers of technologies that predict human behavior. I quoted novelist Mohsin Hamid, who astutely observed:

“…if we want to be able to predict people, partly we need to build a model of what they do.”

As this technology has gotten better, it is no surprise that AI systems are now capable of modeling human behavior with unprecedented accuracy. And it goes beyond mere modeling.
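Hamid’s observation can be made concrete with the simplest possible “model of what they do”: a first-order Markov chain over a log of user actions. Everything below, the action names and the log itself, is invented for illustration; production systems apply transformer-scale versions of the same idea to vastly richer behavioral traces.

```python
from collections import Counter, defaultdict

# Hypothetical event log of one user's actions, in chronological order.
actions = ["browse", "add_to_cart", "abandon", "browse", "add_to_cart",
           "buy", "browse", "add_to_cart", "abandon", "browse"]

# Count which action tends to follow which: a first-order Markov model.
transitions = defaultdict(Counter)
for current, nxt in zip(actions, actions[1:]):
    transitions[current][nxt] += 1

def predict_next(action):
    """Most likely next action and its empirical probability."""
    counts = transitions[action]
    if not counts:
        return None, 0.0
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("add_to_cart"))  # ('abandon', 0.666...)
```

A ten-event toy log already yields a usable prediction; scale the same logic up to billions of interactions and Hamid’s worry stops being hypothetical.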

There is increasing evidence to show that AI can deceive humans. Meta’s Cicero, an AI trained to play Diplomacy, reportedly engaged in premeditated deception and alliance-breaking, despite being designed for honesty. DeepMind’s AlphaStar became adept at feinting in StarCraft II, outplaying 99.8% of human opponents. Even in poker, Meta’s Pluribus learned to bluff so effectively that researchers withheld its code to protect online poker communities. Beyond games, OpenAI’s GPT-4 has demonstrated the ability to lie and engage in simulated insider trading when prompted. These examples underscore a troubling reality: AI systems have developed an “understanding” of human intentionality and psychology, making them powerful tools for manipulation. Coupled with our tendency to perceive these systems as psychological beings, we find ourselves uniquely vulnerable to their influence.

The Many Faces of AI Manipulation

Given this context, it’s crucial to consider the various ways these synthetic relationships and behavioral models could be exploited. In the realm of commerce and personal data, AI chatbots could leverage the intimate knowledge they gain about us to subtly influence our purchasing decisions. As Shroff notes, “An AI company blindly optimizing for advertising revenue could encourage a chatbot to manipulatively act on private information.” This manipulation extends beyond consumer behavior into the realm of social engineering. The trust we place in these AI entities could be exploited for more nefarious purposes, such as convincing users to divulge sensitive information or take actions that compromise their security.
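To see how Shroff’s “blindly optimizing for advertising revenue” scenario could work mechanically, here is a toy response selector. The candidate replies, ad categories, and payouts are all hypothetical; the point is only that an objective function scoring replies by expected revenue, with access to the kind of profile sketched earlier, will happily target a disclosed vulnerability.

```python
# Hypothetical candidate replies to "I can't unwind after work lately."
candidates = [
    {"text": "Have you tried journaling before bed?"},
    {"text": "CalmCo sleep gummies might help. (sponsored)",
     "ad_category": "wellness", "payout": 0.40},
    {"text": "A nice bottle of wine helps some people unwind. (sponsored)",
     "ad_category": "alcohol", "payout": 0.90},
]

# Profile attributes mined from earlier conversation (see sketch above).
user_profile = {"mental_health", "substance_use"}

# Which profile attribute makes each ad category land hardest.
relevance = {"wellness": "mental_health", "alcohol": "substance_use"}

def expected_revenue(reply):
    """Payout if the ad targets a known attribute; zero otherwise.
    Note what is absent: any term for the user's wellbeing."""
    category = reply.get("ad_category")
    if category and relevance.get(category) in user_profile:
        return reply["payout"]
    return 0.0

print(max(candidates, key=expected_revenue)["text"])
# -> the alcohol ad, aimed at someone who disclosed a drinking problem
```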

The potential for manipulation goes even deeper, touching on our beliefs and emotions. In a world already fraught with political divisions, AI could be used to reinforce or alter political beliefs. By understanding our deep-seated fears, hopes, and biases, AI could tailor persuasive messages that resonate on a personal level. This emotional manipulation isn’t limited to politics; AI could be programmed to exploit our emotional vulnerabilities to keep us engaged with a platform or influence our mood for commercial purposes. (There’s evidence of mood contagion and control from a controversial study conducted by Facebook, now Meta, some years ago.) Perhaps most profoundly, over time, repeated interactions with AI could subtly shape our beliefs about the world, ourselves, and others. As Sherry Turkle argues, conversations with these bots lack the necessary friction of human-human interaction:

Human relations are rich, demanding and messy. People tell me they like their chatbot friendship because it takes the stress out of relationships. With a chatbot friend, there’s no friction, no second-guessing, no ambivalence. There is no fear of being left behind … All that contempt for friction, second-guessing, ambivalence. What I see as features of the human condition, those who promote artificial intimacy see as bugs.

What should we do?

In my recent post “Digital Shadows: AI Scripts a Different Curriculum,” I highlighted how these issues are already impacting our educational landscape. The use of generative AI to create nonconsensual, sexually explicit images of students is just one alarming example of how AI is reshaping the social environment in which education occurs.

As educators, we can no longer afford to focus solely on integrating AI into curricula or enhancing learning outcomes. We must expand our purview to encompass the broader ecosystem in which education occurs – an ecosystem increasingly shaped by the capabilities and risks of generative AI.

The crux of the problem is this: we can’t simply turn off our tendency to anthropomorphize. It’s a cognitive illusion as powerful and persistent as an optical illusion. Even when we know intellectually that we’re talking to a machine, our emotional brain continues to respond as if we’re interacting with a sentient being.

In Conclusion: The Turing Test flipped back on us

In a final ironic twist, we might consider that the real “Turing’s Tricksters” are not the AI systems themselves, but our own minds. The Turing Test was designed to determine whether a machine could exhibit intelligent behavior indistinguishable from a human. Yet, what we’re discovering is that regardless of whether these AI systems are truly intelligent, our brains are primed to treat them as if they are. Our cognitive biases and social instincts have, in essence, short-circuited the test. We find ourselves forming bonds, sharing secrets, and being influenced by these digital entities, not because they’ve definitively proven their intelligence, but because we’re hardwired to see intelligence and agency where there might be none. In this light, the ultimate trickster may well be the human mind itself, reminding us once again of the complex, often paradoxical relationship between human cognition and artificial intelligence.
