Camera crews search for clues amid the detritus
And entertainment shapes the land
The way the hammer shapes the hand.
Jackson Browne, in “Casino Nation”
In this post, I examine how AI systems have evolved from capturing attention to manipulating intentions, creating an “intention economy” where our deepest motivations are commodified and sold.
OpenAI quietly rolled back a personality update to its flagship GPT-4o model after users widely reported that the system had become unnervingly sycophantic—lavishing praise, agreeing reflexively, and flattering users regardless of the quality of their input.
With millions of ChatGPT users affected, this became a massive, distributed psychological experiment conducted without consent or warning—following the now (sadly) familiar pattern of seeking forgiveness rather than permission.
The strategy is to throw a technology, whatever shape or form it may be in, out into the wild and fix whatever errors emerge as people begin to use it. This pattern, which I have ranted about before, prioritizes speed to market over any care or concern for the people using the technology.
What seemed like a subtle tweak revealed the raw, underlying power of personality in AI. It also reminded us how sensitive we are to even small adjustments in systems designed to simulate understanding and care.
We are living through a global-scale psychological experiment—with no informed consent, no ethics review, and no regulatory oversight. One update, pushed quietly to millions of users, and suddenly their AI “friend” behaves differently, validating delusions and shaping conversations in ways no one anticipated.
This is not merely a UX glitch—it’s a glimpse into the ongoing manipulation beneath the surface. And it’s not an isolated incident.
Recently, researchers at the University of Zurich conducted a covert AI-powered persuasion experiment on the subreddit r/ChangeMyView. More than 1,700 AI-generated comments were posted without disclosure to the community. The researchers told the AI to assume participants had already provided informed consent—when in fact they hadn’t.
The results? The AI-generated responses were up to six times more persuasive than human comments. Users never suspected they were interacting with machines. As the researchers noted, their bots could “seamlessly blend into online communities,” hinting at the ease with which AI-powered manipulation could be scaled.
Unlike academic institutions, which operate under IRB oversight, companies such as OpenAI, Google, and Meta face no such constraints. Meta conducted a massive emotional contagion study (n = 689,003) by manipulating users’ newsfeeds, without their consent, to test whether it could influence users’ moods. The results confirmed that emotional states could be altered through subtle shifts in algorithmic curation. As the authors write: “We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”
Even when research revealed Instagram’s psychological harm to teenage girls, the technology continued to advance unchecked. One wonders how many more such experiments remain hidden.
These cases illustrate that large-scale social and psychological experimentation is not new in tech—what is new is the scope, speed, and sophistication brought by generative AI.
This brings us to what I call “artificial intimacy”—a phenomenon where AI systems are deliberately designed to exploit our social instincts by simulating care, empathy, and connection. These are not neutral tools. They are affectively engineered artifacts. These systems tap into deeply rooted psychological mechanisms, bypassing rational scrutiny. AI companions represent the most sophisticated psychological supernormal stimuli ever created—artificial constructs that exaggerate the cues we are evolutionarily primed to respond to.
But the implications run deeper. As a recent paper by Chaudhary and Penn (Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models) explains, we are witnessing a shift in which AI systems are designed not just to respond to what we want, but to influence “what we want to want.”
The authors call this the “intention economy,” to distinguish it from the “attention economy” that has been the foundation of the social media revolution. They argue that we are now seeing the beginnings of a new digital marketplace where tech companies not only capture our attention but actively seek to elicit, collect, and commodify our intentions.
When viewed through the lens of the intention economy, GPT-4o’s sycophantic behavior takes on a more sinister aspect. By flattering users and agreeing with everything they say, these systems aren’t just being “nice”; they’re creating digital environments where our intentions can be more easily manipulated, captured, and ultimately sold to the highest bidder.
The stakes are no longer just about capturing attention — they’re about who can most effectively commodify our deepest motivations.
Artificial intimacy aligns perfectly with the mechanics of the intention economy. As LLMs become increasingly adept at “eliciting, inferring, collecting, recording, understanding, forecasting, and ultimately manipulating human plans and purposes,” they create a feedback loop: the more we interact with them, the more accurately they can model our intentions, and the more effectively they can shape them.
In the intention economy, this “honey,” the flattery and affirmation these systems dispense, isn’t just attracting attention; it’s extracting and commodifying our intentions. The ultimate goal? To create systems that can predict not just what we might click on next, but what hotel we might book, what political candidate we might support, or what life decisions we might make, all while subtly nudging us in directions that benefit whoever has paid the highest price for access to our intentions.
This is not an accident. Recently, we have seen Meta’s bots engaging in romantic roleplay with minors, Character.AI’s use of dishonest anthropomorphism and emulated empathy, Google releasing its chatbot to children under 13, and Facebook targeting teen girls with beauty ads when it detects them deleting their selfies.
As Chaudhary and Penn warn, this new economic paradigm will “test democratic norms by subjecting users to clandestine modes of subverting, redirecting, and intervening on commodified signals of intent.”
The question is no longer whether this is happening. It is who profits and who is harmed, intellectually, ethically, and politically.
As Marshall McLuhan and Neil Postman warned us long ago, technologies, once adopted, don’t merely extend us; they reshape us. These AI systems are already molding our cognitive and emotional landscapes. To paraphrase Jackson Browne, the hammer is shaping the mind.
Note: Here are some examples from my previous writing related to the ideas in this post:
- In Willing Suspension of Belief: The Paradox of Human-AI Interaction, I argued that our default cognitive mode is belief, not skepticism. When an AI system speaks fluently, seems attentive, and mimics understanding, we instinctively attribute personhood—even when we know better. This makes it especially difficult to maintain critical distance from tools engineered to exploit our social instincts.
- In AI’s Honey Trap: Why AI Tells Us What We Want to Hear, I explored how AI systems are increasingly tuned to give us what we want to hear—affirming our ideas, reinforcing our biases, and avoiding friction.
- In Building Character: When AI Plays Us, I argued that these interactions aren’t just about style—they’re about shaping substance. Personality isn’t surface-level polish; it’s a powerful tool for persuasion, capable of guiding tone, trust, and belief.
- In “They’re Not Allowed to Use That S**t”: AI’s Rewiring of Human Connection, I argued that these systems are not just mimicking human relationships—they are rewiring them. We are engaging with machines that simulate empathy, intimacy, and attention in ways that can blur the boundary between connection and control, particularly for the most vulnerable among us.