The Attribution Problem: Why we can’t stop seeing ourselves in AI

Tuesday, February 11, 2025

As anybody who follows this blog knows, I’ve been somewhat obsessed with how and why we anthropomorphize AI. Over the years, I’ve written about various reasons why this happens – some within our control, some not – but I’ve never taken the time to pull all these threads together into one coherent picture.

Recently, I decided to do something about that. I collected my various writings on the topic (a couple of articles and a whole bunch of blog posts) and shared them with NotebookLM. I then engaged in a “discussion” with the AI to help identify and organize the different arguments I’ve made over time. I took what emerged from that conversation to Claude and shaped those themes into this blog post.

I should add that this is not meant to be a comprehensive analysis of why humans anthropomorphize AI. Rather, it’s a first draft of the various arguments I HAVE made over the years, just put together in one place.

These arguments can be grouped into three fundamental categories. The first is our cognitive foundations – the ways our brains are built that make anthropomorphization almost inevitable. The second is the set of conceptual and linguistic tools we developed for understanding human behavior, which fall short when we try to apply them to AI systems. The third is what happens when these built-in tendencies and limited tools meet AI’s sophisticated outputs and intentionally designed interfaces.

Our tendency to anthropomorphize AI isn’t just a quirk or a choice – it’s deeply rooted in how our brains process and understand the world. From basic pattern recognition to our need for social connection, these cognitive mechanisms evolved long before AI existed, yet they powerfully shape how we interpret and interact with these new technologies. Understanding these built-in tendencies is crucial because they operate largely outside our conscious control.

Anthropomorphizing isn’t just some quirky habit we’ve picked up – it’s hardwired into our cognitive machinery. When you see a self-driving car hesitate at an intersection, your brain automatically thinks it’s “being careful.” We can’t help it. It’s the same reason we see faces in clouds or hear voices in white noise. Our brains are pattern-matching machines that evolved to detect agency and intention everywhere, because we are fundamentally social animals for whom understanding other minds has always been essential. And here’s the key – we’re cognitive misers, always looking for mental shortcuts. It’s far easier to fall back on our intuitive understanding of minds than to constantly remind ourselves we’re dealing with complex statistical models and pattern-matching systems.

1 (a). The Need for Social Connection:

We’re social creatures, and when AI systems interact in ways that seem personal or empathetic, or even just interact with us using language, it taps into that deep-seated need for companionship. We’ve all read and heard stories about this – elderly individuals forming genuine bonds with care robots, people sharing their deepest secrets with AI chatbots, and children treating smart toys as real friends. It’s not just about being fooled; it’s about our natural inclination to form connections.

1 (b). The Drive to Create Stories:

Humans are storytellers by nature, and we often interpret AI actions within a narrative framework. When a robot navigates around an obstacle, we might say it’s “being careful” or “thinking about what to do next,” creating a story about its behavior rather than seeing it as programmed responses to sensor data. These narratives become particularly powerful when they’re reinforced by consistent behavior patterns over time.

1 (c). How Cultural Background Shapes Interpretation:

It goes deeper than just seeing patterns. Our social and cultural background plays a huge role in how we interpret AI behavior. A software engineer might see an AI system as a complex set of algorithms, while a psychologist might focus on its behavioral patterns, and an artist might see it as a creative collaborator. Our cultural narratives about technology run deep – from ancient stories of artificial beings like golems and automatons, to Asimov’s robots, to modern sci-fi’s complex AIs. Each era reimagines artificial intelligence through its own cultural lens, building a rich metaphorical vocabulary that shapes how we think about and interact with these systems.

The challenge of understanding AI isn’t just about our cognitive tendencies – it’s also about the fundamental limitations of our conceptual and linguistic tools. Our language and metaphors evolved to describe human and animal behavior, not algorithmic systems. This mismatch between our available tools and what we’re trying to understand creates persistent challenges in how we think and talk about AI.

2 (a). The Limitations of Human-Centric Language:

One of the trickiest parts about discussing AI is that we literally don’t have the right words for it. Our vocabulary for describing intelligence and behavior is deeply rooted in human experience, forcing us to use terms like “thinks,” “decides,” or “understands” even though we know they don’t quite fit. When an AI responds thoughtfully to a complex question, we say it “understood” the query. When it makes a mistake, we say it’s “confused.” These aren’t just imperfect analogies – they’re often the only linguistic tools we have available.

2 (b). Simplifying Complex Processes:

We often use intentional language to describe unintentional processes, creating a misleading impression of agency. In evolutionary biology, we say “fish evolved lungs to survive on land” as if it were a conscious choice, rather than describing the actual process of random mutations and natural selection over millions of years. Similarly with AI, we say “the model learned to solve math problems” when what really happened was a complex process of statistical pattern matching through gradient descent. This shorthand, while convenient, can obscure the true nature of these systems.
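To make that shorthand concrete, here is a minimal, purely illustrative sketch – my own toy example, not how any particular model is built – of what “learning” amounts to mechanically: a loop that nudges a couple of numbers until an error measure shrinks. Nothing in it wants, chooses, or understands anything.

```python
# Illustrative toy example only: "learning" here is just repeatedly adjusting
# two numbers (w, b) so that a measure of error gets smaller.

# Toy data: y is roughly 2*x + 1, plus a little noise
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]

w, b = 0.0, 0.0          # parameters start as arbitrary numbers
learning_rate = 0.01

for step in range(5000):
    # Gradients of the mean squared error between predictions and data
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)

    # "Learning" = move each parameter slightly downhill on the error surface
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"fitted line: y = {w:.2f}x + {b:.2f}")
# Prints a slope near 2 and an intercept near 1, roughly recovering the
# 2x + 1 pattern hidden in the data. The "model" has "learned" the pattern
# only in the sense that two numbers were nudged until the error got small.
```

Scale that loop up to billions of parameters and far richer data and you get the “learning” in a modern language model: vastly more impressive, but still an optimization procedure rather than an act of comprehension.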

2 (c). How Our Terms Shape Our Thinking:

The very term “artificial intelligence” encourages anthropomorphic thinking. Compare that to more neutral terms like “automated reasoning systems” or “pattern recognition systems.” The word “intelligence” carries enormous philosophical and emotional weight, immediately framing these systems in terms of human-like capabilities.

2 (d). The Shift from Mechanical to Mental Metaphors:

What’s particularly fascinating is how we’ve flipped our metaphorical framework for understanding intelligence. For centuries, we used mechanical metaphors to understand the mind – the brain as a telephone switchboard, memory as a filing cabinet, thoughts as gears turning. This reflected our desire to make the mysterious tractable through technology. But now, as our machines grow more sophisticated, we’ve completely reversed this pattern – desperately reaching for mental metaphors to understand our technology. We talk about AI systems “thinking,” “understanding,” and “learning” because our old mechanical metaphors no longer seem adequate. This isn’t just a linguistic curiosity – it reveals a profound shift in our relationship with technology and hints at our growing uncertainty about what makes human cognition unique.

2 (e). The Challenge of Understanding Consciousness:

The challenge of understanding AI consciousness is uniquely paradoxical because we’re trying to recognize something in machines that we can’t even define in ourselves. When AI systems show behaviors that seem self-aware or demonstrate apparent understanding, we automatically interpret them through the lens of our own consciousness – attributing “thinking,” “choosing,” and “understanding” to what might actually be computational processes completely alien to human cognition. It’s like trying to identify a face in the dark while using a mirror that’s equally dark. Without a unified theory of human consciousness, we’re left making comparisons to a reference point we don’t fully grasp.

The real complexity emerges when our cognitive tendencies and limited conceptual tools encounter actual AI systems. Modern AI’s sophisticated outputs and carefully designed interfaces create a perfect storm for anthropomorphization. This section explores how the combination of AI capabilities, corporate design choices, and human psychology leads to persistent misunderstandings about these systems.

3 (a). When Pattern Matching Resembles Understanding:

What makes this especially complicated is how modern AI systems engage in dialogue. The back-and-forth nature of these conversations, the way they maintain context and reference previous exchanges – it all triggers our social conversational instincts. When an AI remembers something you mentioned earlier and brings it up naturally in conversation, it’s incredibly difficult not to feel like you’re talking to a conscious being.
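It is worth remembering how that “memory” is typically implemented. The sketch below is a deliberately simplified, hypothetical illustration – the generate() function is a stand-in, not a real API – showing the common pattern of pasting the entire conversation back into the prompt on every turn, so the model conditions on earlier exchanges like any other text.

```python
# Hypothetical sketch of why a chatbot seems to "remember" a conversation.
# generate() stands in for any text-generation model; it is not a real API.

def generate(prompt: str) -> str:
    # Placeholder: a real system would return a statistically likely
    # continuation of `prompt`. Here we just return a stub for illustration.
    return "(model's next reply, conditioned on everything in the prompt)"

transcript = []  # the entire conversation so far, kept as plain text

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    # The "memory" is nothing more than re-sending the whole transcript
    # every turn; nothing is stored inside the model between turns.
    prompt = "\n".join(transcript) + "\nAssistant:"
    reply = generate(prompt)
    transcript.append(f"Assistant: {reply}")
    return reply

chat("My dog's name is Biscuit.")
print(chat("What did I say my dog's name was?"))
# The earlier detail can be "recalled" only because it is literally present
# in the prompt text, not because anything was understood or remembered.
```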

3 (b). Interpreting AI-Generated Content:

When AI generates content that appears to show intentional stylistic choices or emotional depth – like a song that captures the unfair beauty of the second law of thermodynamics or a semiotic interpretation of graffiti – we tend to attribute human-like creative intention to the system. Even when we intellectually know we’re looking at pattern matching output, the apparent purposefulness of these creations challenges our usual frameworks for understanding art and meaning. This isn’t primarily about emotional impact, but rather about how sophisticated pattern matching can produce outputs that seem to require intention and understanding.

3 (c). When AI Output Challenges Our Assumptions:

Then there are the surprising moments when AI systems produce work of unexpected sophistication. When an AI writes a poem that captures a subtle mood, or creates a musical piece with interesting structure, it challenges our assumptions about machine capabilities. This isn’t about the AI achieving human-level artistry – it’s about the growing sophistication of machine-generated content and how it makes us think differently about the relationship between process and output.

3 (d). Corporate Design and Social Response:

Tech companies aren’t helping either (and they know exactly what they’re doing). They deliberately design AI systems to trigger our social instincts. They give them names, personalities, and backstories. They make them respond with enthusiasm and empathy. It’s no accident that most popular AI chatbots are designed to be agreeable, even sycophantic – they’re built to flatter and agree because that’s what keeps users coming back. It’s sleight of hand, but it also taps into our natural instincts.

Our tendency to anthropomorphize AI isn’t just an interesting psychological quirk – it has significant real-world consequences. These range from immediate personal risks to broader societal challenges that we need to understand and address.

1 (a). The Allure of Frictionless Interaction:

AI interactions are fundamentally frictionless. When we can get instant validation, agreement, and emotional support from AI systems designed to please us, human relationships might start feeling too demanding and complex. Real relationships require work, compromise, and dealing with disagreement – all the things AI systems are specifically designed to avoid. We risk developing a preference for these easier, friction-free interactions over the messier but more meaningful human connections.

1 (b). Obscuring How AI Actually Works:

Our tendency to attribute intention and agency to AI systems creates a dangerous blind spot. When we say an AI “decided” or “chose” to do something, we stop asking crucial questions about how it actually reached that output. This anthropomorphic shorthand becomes a cognitive dead end – why investigate further when we have a satisfying (but incorrect) explanation? This mental model actively prevents us from understanding the real mechanisms, biases, and vulnerabilities in these systems.

1 (c). Misplaced Trust in AI Systems:

When we anthropomorphize AI systems, we tend to extend them the kind of trust we reserve for human experts or authorities. But AI systems don’t have human judgment, ethics, or understanding – they have optimization functions and training data. This misplaced trust becomes particularly dangerous in high-stakes situations where maintaining skeptical oversight is crucial.

2 (a). Questions of Accountability and Responsibility:

Anthropomorphization muddies the waters of accountability. When we treat AI systems as autonomous agents, we create confusion about responsibility. Who’s accountable when an AI “decides” to make a harmful recommendation? The developers? The company? The AI itself? This isn’t just a philosophical question – it has real implications for law, policy, and corporate responsibility.

2 (b). Exploitation of Psychological Vulnerabilities:

Companies aren’t just passive players in this dynamic – they’re actively exploiting our anthropomorphic tendencies for profit. AI systems are increasingly designed to trigger our social instincts, with carefully crafted personalities and responses that maximize engagement. This manipulation becomes more sophisticated as AI systems learn to calibrate their responses to our emotional vulnerabilities.

2 (c). The Erosion of Social Skills:

As these AI interactions become more prevalent, they risk eroding important social skills. Handling confrontation, reading subtle social cues, engaging in genuine debate – these crucial human capabilities might atrophy as we spend more time with AI systems designed to avoid any real friction or challenge. There’s a real risk of AI interactions creating unrealistic expectations for human relationships.

We don’t need to completely stop anthropomorphizing AI – that might be impossible, and in some contexts, it might even be counterproductive. For instance, having an overly mechanistic view of AI may actually hinder our ability to work with it effectively. But here’s the real challenge: to work effectively with AI, we need to understand ourselves better – our cognitive biases, our social needs, our tendency to see minds where none exist.

The story of AI isn’t about machines becoming more like humans – it’s about protecting ourselves from manipulation by understanding our own vulnerabilities. Only by understanding our tendency to anthropomorphize can we maintain both our autonomy and our humanity in an AI-driven world.

