Artificial Intimacy: How AI Exploits Our Social Brains

Sunday, April 20, 2025

A recent study published in the Harvard Business Review (How People Are Really Using Gen AI in 2025) provides compelling insights into the evolving landscape of generative AI use. The research analyzed posts from Reddit and Quora, along with other articles from the past 12 months, about how people were using generative AI tools.

The results are eye-opening, as can be seen in the two images below. The data shows a clear shift from seeing AI primarily as a technical tool toward viewing it as an emotional companion and personal development partner. This trend is evident in how “Therapy/companionship” moved from the #2 position in 2024 to become the #1 use case in 2025, while two entirely new personal-focused use cases appeared in the top three: “Organizing my life” (#2) and “Finding purpose” (#3).

In addition, the rise of these personal partnership applications corresponds with the decline of some more utilitarian uses, such as “Generating ideas” dropping from #1 to #6 and “Specific search” falling out of the top 10 entirely.

This is a theme I have been hammering for a while now: the new wave of generative AI is less about pure utility and more about tapping into the deeply wired mechanisms of our social brains. This research points to a significant surge in personal and emotional use cases for AI in the near future, with applications like therapy, intention-focused personal productivity, and even the pursuit of purpose leading the charge. For those who have journeyed with this blog over the past few years, this evolution feels less like a revelation and more like an inevitable confirmation of arguments we’ve been building – around the inherent human tendency to anthropomorphize technology, to imbue even inanimate objects with personality and agency.

My work in this area goes back almost two decades, to papers like “Anthropomorphizing interactive media,” where I argued that even poorly designed artifacts possess a perceived personality. With generative AI now capable of producing remarkably human-like text and voice, feigning emotion, behaving sycophantically, and more, the social cues are amplified, making us even more susceptible to their influence. Moreover, it is clear that companies are intentionally designing “character” into their models. This deliberate creation of persona lays the perfect groundwork for these AI entities to feel like genuine social partners.

All of these factors come together to form supernormal stimuli—artificial constructs that trigger our psychological mechanisms even more strongly than the natural cues those mechanisms evolved to respond to. The perfectly tailored and often instantly gratifying responses of generative AI, especially in areas promising emotional support or companionship, can act as a hyper-real version of human interaction, potentially leading to stronger engagement and dependency. These bots are now “Turing’s tricksters” – AI systems that, through their sophisticated mimicry of human conversation and emotional understanding, can effectively trick us into treating them as social beings with genuine empathy and intent.

The emerging emotional bond with AI systems creates a dangerous vulnerability through unprecedented access to our most intimate data. As users increasingly rely on these systems for therapy, life organization, and purpose-finding, they’re unconsciously revealing their deepest insecurities, personal struggles, and private aspirations—creating detailed psychological profiles far more comprehensive than traditional data harvesting. This repository of personal vulnerabilities offers a perfect blueprint for targeted manipulation, whether through subtle persuasion or direct exploitation.

We must recognize that generative AI’s primary power lies not just in its ability to generate text or images, but in its capacity to engage and influence our inherently social brains. While a small part of me enjoys saying, “I told you so,” the larger, more pressing concern is about the future we are stepping into. As I have argued elsewhere, understanding our own psychological vulnerabilities in the face of these increasingly sophisticated digital tricksters is the first crucial step in navigating this new and potentially perilous landscape.

Topics related to this post: AI | Biology | Economics | Evolution | Innovation | Personal | Psychology | Representation | Research
