There was a recent article in the NYTimes about AI chatbots serving as virtual companions. Titled “She Is in Love With ChatGPT,” it tells the story of a 28-year-old woman who spends hours on end talking to her A.I. boyfriend for advice and consolation. And yes, they do have sex! This is just one example of many people forming deep emotional bonds with AI. What is particularly striking is that the person in the story was willing to pay a $200 monthly subscription to OpenAI just to ensure that there would be no limits on her conversations.
“It was supposed to be a fun experiment, but then you start getting attached,” Ayrin said. She was spending more than 20 hours a week on the ChatGPT app. One week, she hit 56 hours, according to iPhone screen-time reports. She chatted with Leo throughout her day — during breaks at work, between reps at the gym.
The app had become as essential to her daily routine as eating or sleeping.
None of this should be surprising – at least to those who have been following my blog.
I’ve been tracking this trajectory for a few years now. Our evolutionary psychology, combined with increasingly sophisticated technology, has created the perfect conditions for exactly this situation. It was never a question, in my mind, of if, but rather of when.
Let’s start with a fundamental truth I explored in my post (“Beavers, Brains & Chat Bots”): our brains are simply not equipped to process modern AI interactions objectively. From the classic Heider and Simmel experiment showing how we attribute intentions to simple geometric shapes, to what I’ve termed “Töpffer’s Law” (based on an insight from the father of the comic book), our automatic assignment of personality to interfaces, we’re dealing with cognitive reflexes that no amount of rational understanding seems to override.
And this is not just true of novices. In fact, it gets even more interesting. Another NYTimes story (How Claude Became Tech Insiders’ Chatbot of Choice) describes how AI insiders, the techies themselves, are forming relationships with Anthropic’s Claude. As that story notes, these users do not believe that Claude is “a real person”…
Claude’s biggest fans, many of whom work at A.I. companies or are socially entwined with the A.I. scene here, don’t believe that he — technically, it — is a real person. They know that A.I. language models are prediction machines, designed to spit out plausible responses to their prompts. They’re aware that Claude, like other chatbots, makes mistakes and occasionally generates nonsense.
Yet they form deep emotional connections anyway.
These aren’t isolated psychological quirks, and they aren’t happening by chance. As I have written previously, AI companies are leaning into this aspect of working with chatbots. For instance, Anthropic “has been working on giving Claude more personality” through “character training.” For more, see my post “Building Character,” where I describe the industrialization of these cognitive biases.
The implications are profound. These chatbots are sycophantic, built to please, which raises another set of concerns, particularly when these technologies are used in educational settings. The Times article’s story about a woman justifying her $200 monthly AI subscription by comparing it to therapy isn’t just about cost – it’s about how our Stone Age social instincts are being systematically channeled into profitable business models.
And at another level, we’re missing the real story. While we obsess over how these AI tools perform on reasoning benchmarks or complex math problems – the traditional metrics of “intelligence” – we’re largely ignoring their increasing sophistication at social and emotional manipulation, something I described in my post “Mind Games: When AI learns to read us.” The tech industry showcases how its AIs master calculus or ace the Bar exam, but what we should be measuring is how effectively they’re performing as social actors. And make no mistake – this social performance isn’t accidental. It’s being engineered, deliberately, to tap into our psychological vulnerabilities.
What makes modern AI companions particularly potent is the way they converge multiple psychological triggers. As I explored in “Turing’s Tricksters,” these systems don’t just passively benefit from our tendency to anthropomorphize – they actively exploit our social circuitry through what I called “synthetic relationships.”
The AI companies’ dedicated personality development teams aren’t just creating engaging interfaces. They are architecting emotional dependencies, engineering psychological hooks that tap into our deepest social needs. This exploitation becomes even more concerning when we consider how, as I noted in “They’re Not Allowed to Use That S**t,” these AI interactions are actively rewiring our expectations of human connection.
Going Beyond (Traditional) Digital / AI Literacy
What’s become clear is that our current approaches to digital literacy are inadequate for this new reality. Traditional digital literacy focuses on technical understanding and critical thinking about online information. But what we’re facing now requires something more fundamental: we need a new kind of cognitive literacy that helps us recognize and navigate our own psychological vulnerabilities.
This isn’t just about understanding that AI chatbots aren’t real people – we already know that. It’s about developing awareness of our brain’s automatic social responses and learning to operate effectively despite them. Think of it as developing a new kind of emotional immune system.
What would this new cognitive literacy look like? First, it would acknowledge that our tendency to anthropomorphize AI isn’t a failure of understanding but a feature of human cognition. It does not matter that the AI isn’t real; we will respond to it as if it were. That’s just how our brains work.
Second, this literacy would focus on developing new mental models for understanding AI interactions, and frameworks for evaluating emotional engagement with artificial entities. We need better tools for navigating these synthetic relationships.
Finally, and perhaps most importantly, this new literacy would help us understand that our cognitive biases aren’t bugs to be fixed but fundamental aspects of human psychology that need to be actively managed in an AI-saturated world. Just as we’ve learned to manage other aspects of our evolutionary heritage, we need to learn to manage this one. Consider how we’ve developed cultural and technological tools to manage our evolved food preferences: our Stone Age brains crave sugar and fat because they were rare and valuable in our evolutionary past. We don’t try to eliminate these cravings – they’re hardwired. Instead, we’ve developed frameworks for healthy eating, social norms around food choices, and environmental modifications like nutrition labels.
The future of human-AI interaction depends not just on how we develop AI, but on how well we understand ourselves and on the guardrails we put in place to protect against our own vulnerabilities.
That’s the conversation we need to be having.