According to MIT Technology Review ("AI can now create a replica of your personality"), a new paper from Stanford and Google DeepMind researchers claims that a two-hour interview is enough for AI to create an accurate "replica" of your personality.
The idea that we can capture the essence of a person in a two-hour interview seems absurd – but then again, I’ve been wrong before about the capabilities of AI.
Still, digging deeper into their claim of "85% accuracy on personality tests" (specifically, the Big Five personality test), I found myself revisiting questions that have been nagging at me since I first started exploring the intersection of human psychology and artificial intelligence.
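A quick aside to make that number concrete. As I understand the paper's methodology, the 85% is a normalized figure: the replica's answers are compared to the participant's, then divided by how consistently the participant reproduces their own answers two weeks later. Here is a minimal sketch of that arithmetic; the responses and the exact-match metric are hypothetical, purely for illustration.

```python
import numpy as np

def match_rate(a, b):
    """Fraction of questionnaire items on which two response vectors agree exactly."""
    return float(np.mean(np.asarray(a) == np.asarray(b)))

# Hypothetical 1-5 Likert responses to ten Big Five items.
person_week0 = [4, 2, 5, 3, 4, 1, 5, 2, 3, 4]  # the person, first sitting
person_week2 = [4, 2, 4, 3, 4, 1, 5, 2, 2, 4]  # the same person, two weeks later
replica      = [4, 3, 5, 3, 4, 1, 4, 2, 2, 4]  # the AI "replica"

raw        = match_rate(replica, person_week0)       # replica vs. person
ceiling    = match_rate(person_week2, person_week0)  # person vs. themselves
normalized = raw / ceiling                           # accuracy relative to self-consistency

print(f"raw: {raw:.0%}, self-consistency: {ceiling:.0%}, normalized: {normalized:.0%}")
# raw: 70%, self-consistency: 80%, normalized: 88%
```

The punchline: a normalized score can look impressive even when raw agreement is modest, because the ceiling (people agreeing with themselves) is itself well below 100%.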
Here’s the thing: even our best tools for understanding human personality are surprisingly shaky.
As I was reflecting on the claims made in this paper, I happened to listen to an episode (Megapod: Why is there so much BS in Psychology) of Derek Thompson's Plain English podcast featuring psychologist Adam Mastroianni discussing, of all things, the Big Five personality test. For context: the Big Five framework is psychology's gold standard for personality, seeking to capture all aspects of human personality in five (some argue six) factors.
In this podcast (and in a Substack post), Mastroianni argued that it is not entirely clear the Big Five is all it's made out to be. Despite millions of dollars and thousands of studies, he says, new research shows that the Big Five barely outperforms tests like the Myers-Briggs or the Enneagram at predicting real-life outcomes. As he writes:
The Enneagram was apparently invented by a series of spiritual teachers, and until Isabel Briggs Myers popularized her eponymous personality test, she was most famous for writing racist detective stories.
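To put "predicting real-life outcomes" in concrete terms: predictive validity in this literature usually means correlating a trait score with an outcome like job performance, and those correlations are typically modest, often in the 0.1 to 0.3 range. A toy simulation with made-up numbers shows how little that buys you:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Made-up data: a trait score (say, conscientiousness) with a weak true
# effect on a life outcome (say, job performance), plus plenty of noise.
trait = rng.normal(size=n)
outcome = 0.25 * trait + rng.normal(size=n)

r = np.corrcoef(trait, outcome)[0, 1]
print(f"correlation r = {r:.2f}, variance explained = {r**2:.1%}")
# roughly: r ≈ 0.24, variance explained ≈ 6%
```

A correlation of 0.24 sounds respectable until you notice it leaves about 94% of the variance unexplained, which is exactly the kind of gap Mastroianni is pointing at.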
So what does it mean when our AI “replicas” achieve high accuracy on tests that themselves might not capture the full complexity of human nature? The language we use here is telling – we talk about “replicas” and “accuracy” as if personality were something that could be photocopied and measured like a physical object.
This connects to something Mohsin Hamid said that’s stuck with me since 2022: “if we want to be able to predict people, partly we need to build a model of what they do.” Social media algorithms already do this at scale – predicting and guiding our behaviors based on collective patterns. But this new research claims something more ambitious: not just modeling behavior, but replicating personality itself. It’s a leap from prediction to replication that deserves more scrutiny.
What fascinates me most is how we’ve arrived at a unique moment in human history – we’re trying to create “replicas” of human minds using systems we understand primarily through metaphors of mind. Both our brains and these AI systems are black boxes, and we’re essentially trying to understand one impenetrable system by comparing it to another. Working on this idea has led me to see how our metaphors for these systems – “replicas,” “digital twins,” “artificial intelligence” – might actually be constraining our understanding of both the technology and ourselves.
The MIT Technology Review article quotes one of the researchers as saying:
If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made—that, I think, is ultimately the future.
It is actually quite amazing how readily we have accepted that these "yous" are truly replicas of our personalities. Does a replica stay static and unevolving, or does it change with experience? What kinds of decisions will my replica make? Can it go to meetings in my place?
And even if it were possible … boy do I see interesting challenges. What happens in a world where I can send my AI replica to handle a difficult conversation, make decisions on my behalf, or even break up with a partner? We’re already seeing the first warning signs of where this might lead. Eric Schmidt, former Google CEO, recently raised concerns about young men creating “perfect” AI romantic partners, warning that these relationships might actually increase loneliness and lead to obsession. It’s a stark reminder that when we try to engineer away the complexities of human interaction, we might end up amplifying our isolation instead.
Beyond these practical and psychological implications, this creates a fascinating feedback loop – these systems won’t just replicate our decision-making, they’ll reshape it.
Freud once noted that “In psychology we can describe only with the help of comparisons… we are forced to change these comparisons over and over again, for none of them can serve us for any length of time.”
Perhaps instead of asking how accurately AI can replicate human personality, we should be asking different questions altogether. What new metaphors might help us better understand these emerging technologies?
And as Sherry Turkle reminds us, human relationships are “rich, demanding and messy.” What happens to human identity and authenticity in a world where that essential messiness is engineered away?