“I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.” These words from Martin Luther King Jr. speak to something fundamentally human – the belief that developing strong character, those core values and behaviors that define who we are, is essential to our growth as individuals and as a society. But what happens when artificial intelligence enters this deeply human territory? When “building character” becomes less about moral development and more about engineering artificial personalities designed to exploit our social nature? As AI companies rush to develop more persuasive digital personalities, the very notion of character – and our ability to judge it – becomes increasingly complicated.
I was thinking about this as two pieces crossed my feed recently. The first is a Financial Times article (How AI groups are infusing their chatbots with personality) revealing how major AI companies are now explicitly developing “personalities” for their models. The second is Andrew Maynard’s sobering analysis (in his Substack post Learning to live with agentic social AI) of what he calls “agentic social AI” – systems that capitalize on human psychological vulnerabilities through sophisticated persuasion techniques.
The FT article describes how OpenAI, Google, and Anthropic have created dedicated teams for “model behavior” – essentially crafting how their AI assistants present themselves to users. Meanwhile, Maynard warns us about how these systems can manipulate users through behavioral nudges, exploiting our tendency to anthropomorphize technology and form social bonds with it.
I hate to say it (actually, I don’t) but I had sort of predicted this. In an earlier post (“Cats on the Moon: How OpenAI, Google, Meta, Microsoft & Apple are Dealing with Hallucinations”) I looked at how different tech giants are approaching the challenge of AI hallucinations. Rather than viewing these quirks as purely technical problems, companies like OpenAI have been strategically leaning into our tendency to anthropomorphize AI.
The mechanisms behind this manipulation run deep in our evolutionary programming. We humans are predisposed to believe in representations, whether in cinema or AI interactions (more at “Willing Suspension of Belief: The Paradox of Human-AI Interaction”). Our default mode is belief, not skepticism. This cognitive shortcut, which served us well throughout evolution, now makes us particularly susceptible to manipulation by AI systems designed to trigger our social responses.
We can see this vulnerability play out in subtle ways. In “Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI,” I compared our response to AI to beavers building dams in response to speakers playing running water. Just as beavers can’t help but respond to the sound of running water, we can’t help but respond to social cues – even when we know they’re artificial. The FT article reveals that AI companies are now actively engineering these cues, with teams dedicated to making their AI models “kind” and “fun,” effectively building ever-more-sophisticated social triggers.
This vulnerability becomes most obvious when it comes to audio and voice. Anybody who has listened to NotebookLM’s podcasts knows what I am talking about. (You can listen to some examples in “NotebookLM’s Viral Secret: It’s All in Our Heads”). Google’s system incorporates subtle touches like pauses and casual speech patterns that trick our brains into perceiving AI-generated conversations as genuine.
In the most recent episode (Daydreaming of digital notebooks and virtual artificial worlds) of the Modem Futura podcast, hosts Sean Leahy and Andrew Maynard suggest that voice, unlike text, has a unique ability to bypass our critical thinking defenses entirely. When we hear a voice, our ancient social brain kicks in before our analytical mind can catch up. As Andrew points out, these systems can simulate empathy and understanding, steering conversations in ways that build trust while lacking true comprehension.
These AI systems are already adept at creating synthetic relationships that feel real but are fundamentally one-sided (see my post on Turing’s Tricksters). They have no knowledge of truth – they are, at heart, Bulls**t Artists in Harry Frankfurt’s sense of the term. Yet they can produce content that seems persuasive while lacking genuine understanding. Put those pieces together and we’re looking at a perfect storm of manipulation potential.
What’s particularly chilling is that ChatGPT and similar systems already demonstrate an understanding of human psychological vulnerabilities. Andrew Maynard shows just how much these LLMs know about how to modify our behavior – which buttons to push, which cognitive biases to exploit, and how to build trust through carefully crafted responses.
The knowledge is there – the question isn’t whether these capabilities will be used for manipulation, but who will do it first and to what end.
AI companies have clearly noticed this. The FT article details how the major players are racing to harness it. While they frame their personality-development efforts in terms of making their systems more helpful and engaging, they’re essentially building more sophisticated manipulation engines. The corporate approaches might seem varied – OpenAI claims to strive for objectivity, while Anthropic embraces the impossibility of true neutrality – but they’re all working toward the same goal: making their AI models more persuasive by making them more “human-like.”
What is worrisome is that these companies are deliberately developing AI personalities that exploit our social instincts while simultaneously acknowledging that they don’t fully understand how to control these personalities or their effects on users. (Insert link to my rant here – for those interested.) Of course, they claim to factor ethics and safety into their work, though how far that is true is open to debate.
The real threat, I believe, lies in how bad actors will inevitably use these same insights about human psychology to create AI systems specifically designed to manipulate and deceive. They’ll be exploiting the same cognitive vulnerabilities, but without any ethical constraints or oversight.
As educators, we need to bring our awareness of these issues into our curriculum. But we must also recognize that standard definitions of media literacy will not work here.
What we need is a fundamentally new approach. Traditional media literacy has focused on understanding how media shapes messages. But in an age of AI systems explicitly designed to exploit our psychological vulnerabilities, this isn’t enough. We need a deeper awareness that combines an understanding of both the medium and our responses to it – our cognitive biases, our evolutionary predispositions, and our social instincts.
This new literacy must go beyond just recognizing AI’s capabilities and limitations. It must help us understand our own limitations – why we anthropomorphize, why we trust, why we form emotional bonds even when we know they’re artificial. Only by understanding both sides of this equation – the technological and the human – can we hope to navigate this new landscape wisely. True media literacy in the age of AI isn’t just about understanding the nature of these new technologies – it’s about understanding ourselves.