A recent study published in the Harvard Business Review, “How People Are Really Using Gen AI in 2025,” provides compelling insights into the evolving landscape of generative AI use. The research analyzed posts from Reddit and Quora, along with other articles from the past 12 months, to understand how people are using generative AI tools.
The results are eye-opening, as can be seen in the two images below. The data shows a clear shift from seeing AI primarily as a technical tool toward viewing it as an emotional companion and personal development partner. This trend is evident in how “Therapy/companionship” moved from the #2 position in 2024 to become the #1 use case in 2025, while two entirely new personal-focused use cases appeared in the top three: “Organizing my life” (#2) and “Finding purpose” (#3).

This evolution suggests users are forming deeper, more emotionally significant relationships with AI systems: treating them less as utilities for specific tasks and more as ongoing partners in their personal journeys, and entrusting these technologies with their most intimate challenges and aspirations rather than just technical problems or content-creation needs.
In addition, the rise of these personal partnership applications corresponds with the decline of some more utilitarian uses, such as “Generating ideas” dropping from #1 to #6 and “Specific search” falling out of the top 10 entirely.
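To make the kind of analysis behind these rankings more concrete, here is a minimal sketch of how one might tag posts by use case and rank them year over year. This is purely my own illustration, assuming a simple keyword-matching approach: the category names come from the HBR article, but the tagging logic, keyword lists, and sample posts are invented and are not the study’s actual method.

```python
from collections import Counter

# Hypothetical keyword lists per use case (category names from the HBR
# article; the keywords themselves are invented for illustration).
USE_CASE_KEYWORDS = {
    "Therapy/companionship": ["therapy", "lonely", "companion", "listens"],
    "Organizing my life": ["organize", "schedule", "routine", "planning"],
    "Finding purpose": ["purpose", "meaning", "direction"],
    "Generating ideas": ["brainstorm", "ideas", "inspiration"],
}

def tag_post(text: str) -> set[str]:
    """Return every use case whose keywords appear in the post."""
    lowered = text.lower()
    return {case for case, words in USE_CASE_KEYWORDS.items()
            if any(word in lowered for word in words)}

def rank_use_cases(posts: list[str]) -> list[tuple[str, int]]:
    """Count tagged posts per use case, sorted from most to least common."""
    counts = Counter(case for post in posts for case in tag_post(post))
    return counts.most_common()

# Toy example: compare two years of posts to see which use cases rose or fell.
posts_2024 = ["Need to brainstorm ideas for a blog post",
              "It helps me plan my weekly schedule"]
posts_2025 = ["Talking to it feels like therapy when I'm lonely",
              "It helped me find a sense of purpose"]
print(rank_use_cases(posts_2024))
print(rank_use_cases(posts_2025))
```

A real study would of course need far more robust classification (human coding or an LLM classifier rather than keyword matching), but the ranking comparison works the same way.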
These findings underscore a crucial point: generative AI is moving beyond purely functional applications and is increasingly being adopted in ways that directly engage our social brains.
This is a theme I have been hammering for a while now: the new wave of generative AI is less about pure utility and more about tapping into the deeply wired mechanisms of our social brains. This research points to a significant surge in personal and emotional use cases for AI in the near future, with applications like therapy, intention-driven personal productivity, and even the pursuit of purpose leading the charge. For those who have journeyed with this blog over the past few years, this evolution feels less like a revelation and more like an inevitable confirmation of arguments we’ve been building – around the inherent human tendency to anthropomorphize technology, to imbue even inanimate objects with personality and agency.

My work in this area goes back almost two decades, with papers like “Anthropomorphizing interactive media,” in which I argued that even poorly designed artifacts possess a perceived personality. With generative AI now capable of producing remarkably human-like text and voice, feigning emotion, and behaving sycophantically, the social cues are amplified, making us even more susceptible to their influence. Moreover, it is clear that companies are intentionally designing “character” into their models. This deliberate creation of persona lays the perfect groundwork for these AI entities to feel like genuine social partners.
All of these factors come together to form supernormal stimuli—artificial constructs that can trigger our psychological mechanisms even more strongly than natural ones. The perfectly tailored and often instantly gratifying responses of generative AI, especially in areas promising emotional support or companionship, can act as a hyper-real version of human interaction, potentially leading to stronger engagement and dependency. These bots are now “Turing’s tricksters” – AI systems that, through their sophisticated mimicry of human conversation and emotional understanding, can effectively trick us into treating them as social beings with genuine empathy and intent.
The emerging emotional bond with AI systems creates a dangerous vulnerability through unprecedented access to our most intimate data. As users increasingly rely on these systems for therapy, life organization, and purpose-finding, they are unknowingly revealing their deepest insecurities, personal struggles, and private aspirations, producing psychological profiles far more detailed than anything traditional data harvesting could assemble. This repository of personal vulnerabilities offers a perfect blueprint for targeted manipulation, whether through subtle persuasion or direct exploitation.
With AI systems designed specifically to foster trust and emotional connection, users lower their natural defenses, creating a perfect storm where the most private aspects of human experience become commodified data points. Unlike previous technologies that merely tracked behaviors, these AI relationships extract emotional patterns, decision-making tendencies, and value systems—precisely the information needed to influence beliefs and behaviors with unprecedented precision. The danger lies not just in the collection of this sensitive data but in how readily we surrender it to systems intentionally crafted to feel like trusted confidants rather than the sophisticated data-gathering tools they ultimately are.
We must recognize that generative AI’s primary power lies not just in its ability to generate text or images, but in its capacity to engage and influence our inherently social brains. While a small part of me enjoys saying, “I told you so,” the larger, more pressing concern is about the future we are stepping into. As I have argued elsewhere, understanding our own psychological vulnerabilities in the face of these increasingly sophisticated digital tricksters is the first crucial step in navigating this new and potentially perilous landscape.