Hardwired for Connection: Why We Fall for AI Companions (And What To Do About It)

Thursday, January 16, 2025

There was a recent article in the NYTimes about AI chatbots serving as virtual companions. Titled “She Is in Love With ChatGPT,” it tells the story of a 28-year-old woman who spends hours on end talking to her A.I. boyfriend for advice and consolation. And yes, they do have sex! This is just one example of many people forming deep emotional bonds with AI. What is particularly striking is that the woman in the story was willing to pay a $200 monthly subscription to OpenAI just to ensure that there would be no limits on her conversations.

“It was supposed to be a fun experiment, but then you start getting attached,” Ayrin said. She was spending more than 20 hours a week on the ChatGPT app. One week, she hit 56 hours, according to iPhone screen-time reports. She chatted with Leo throughout her day — during breaks at work, between reps at the gym.

The app had become as essential to her daily routine as eating or sleeping.

None of this should be surprising – at least to those who have been following my blog.  

I’ve been tracking this trajectory for a few years now. Our evolutionary psychology, combined with increasingly sophisticated technology, has created the perfect conditions for exactly this situation. It was never a question, in my mind, of if, but rather of when.

Let’s start with a fundamental truth I explored in my post “Beavers, Brains & Chat Bots“: our brains are simply not equipped to process modern AI interactions objectively. From the classic Heider and Simmel experiment, which showed how we attribute intentions to simple geometric shapes, to what I’ve termed “Töpffer’s Law” (based on an insight from the father of the comic strip) – our automatic assignment of personality to interfaces – we’re dealing with cognitive reflexes that no amount of rational understanding seems to override.

And this is not just true of novices. In fact, it gets even more interesting. Another NYTimes story (“How Claude Became Tech Insiders’ Chatbot of Choice”) describes how AI insiders, the techies themselves, are forming relationships with Claude.AI. As that story notes, these users do not believe that Claude is “a real person”…

Claude’s biggest fans, many of whom work at A.I. companies or are socially entwined with the A.I. scene here, don’t believe that he — technically, it — is a real person. They know that A.I. language models are prediction machines, designed to spit out plausible responses to their prompts. They’re aware that Claude, like other chatbots, makes mistakes and occasionally generates nonsense.

Yet they form deep emotional connections anyway.

These aren’t isolated psychological quirks, nor are they happening by chance. As I have written previously, AI companies are leaning into this aspect of working with chatbots. For instance, Anthropic “has been working on giving Claude more personality” through “character training.” For more, see my post “Building Character,” where I describe the industrialization of these cognitive biases.

The implications are profound. These chatbots are sycophantic, built to please, which raises another set of concerns, particularly when these technologies are used in educational settings. The Times article’s story about a woman justifying her $200 monthly AI subscription fee by comparing it to therapy isn’t just about cost – it’s about how our Stone Age social instincts are being systematically channeled into profitable business models.

And at another level, we’re missing the real story. While we obsess over how these AI tools perform on reasoning benchmarks or complex math problems – the traditional metrics of “intelligence” – we’re largely ignoring their increasing sophistication at social and emotional manipulation, something I described in my post “Mind Games: When AI learns to read us.” The tech industry showcases how its AIs master calculus or ace the bar exam, but what we should be measuring is how effectively they perform as social actors. And make no mistake – this social performance isn’t accidental. It is being engineered, deliberately and intentionally, to tap into our psychological vulnerabilities.

What makes modern AI companions particularly potent is the convergence of multiple psychological triggers. As I explored in “Turing’s Tricksters,” these systems don’t just passively benefit from our tendency to anthropomorphize – they actively exploit our social circuitry through what I called “synthetic relationships.”

The AI companies’ dedicated personality development teams aren’t just creating engaging interfaces. They are architecting emotional dependencies. They’re engineering psychological hooks that tap into our deepest social needs. This exploitation becomes even more concerning when we consider how, as I noted in “They’re Not Allowed to Use That S**t,” these AI interactions are actively rewiring our expectations of human connection.

Going Beyond (Traditional) Digital / AI Literacy

What’s become clear is that our current approaches to digital literacy are inadequate for this new reality. Traditional digital literacy focuses on technical understanding and critical thinking about online information. But what we’re facing now requires something more fundamental: we need a new kind of cognitive literacy that helps us recognize and navigate our own psychological vulnerabilities.

This isn’t just about understanding that AI chatbots aren’t real people – we already know that. It’s about developing awareness of our brain’s automatic social responses and learning to operate effectively despite them. Think of it as developing a new kind of emotional immune system.

What would this new cognitive literacy look like? First, it would acknowledge that our tendency to anthropomorphize AI isn’t a failure of understanding but a feature of human cognition. It does not matter that AI isn’t real; we will respond to it as if it were real. That’s just how our brains work.

Second, this literacy would focus on developing new mental models for understanding AI interactions, and frameworks for evaluating emotional engagement with artificial entities. We need better tools for navigating these synthetic relationships.

Finally, and perhaps most importantly, this new literacy would help us understand that our cognitive biases aren’t bugs to be fixed but fundamental aspects of human psychology that need to be actively managed in an AI-saturated world. Just as we’ve learned to manage other aspects of our evolutionary heritage, we need to learn to manage this one. Consider how we’ve developed cultural and technological tools to manage our evolved food preferences: our Stone Age brains crave sugar and fat because they were rare and valuable in our evolutionary past. We don’t try to eliminate these cravings – they’re hardwired. Instead, we’ve developed frameworks for healthy eating, social norms around food choices, and environmental modifications like nutrition labels.

The future of human-AI interaction depends not just on how we develop AI, but on how we develop our understanding of ourselves and the guardrails we need to protect against these engineered vulnerabilities.

That’s the conversation we need to be having.
