The Mirror and the Machine: Navigating the Metaphors of Gen AI

Thursday, March 13, 2025

A couple of weeks ago I was invited by Eamon Costello to present a talk at the Education after the algorithm: Co-designing critical and creative futures conference being held in Dublin. And no, I didn’t get to go to Dublin for my talk – I had to do it from here in Phoenix, AZ over Zoom. <Insert sad face emoji.>

I also did something different from my usual style of using slides. For once I actually wrote out my comments and just talked – and I quite enjoyed it. Not sure I would do it in an in-person presentation, but online I think it worked better.

A huge thanks to Dr. Eamon Costello for inviting me and hosting the session, and to Dr. Alison Egan, Mr. Rob Lowney, Ms. Kate Molly and Ms. Irina Grigorescu for their comments and questions.

The title and abstract of the talk are given below.

The Mirror and the Machine: Navigating the Metaphors of Generative AI

As generative AI systems grow increasingly sophisticated, we find ourselves grappling for ways to understand and interact with this technology. We often resort to metaphors, but these metaphors, from the mechanistic to the mythical, in turn shape our perceptions and decisions in profound and often hidden ways. However, with this particular technology we seem to draw metaphors from our own minds, anthropomorphizing AI in an attempt to make sense of it. This talk explores the spectrum of AI metaphors, examines their implications in contexts such as education, and confronts the fundamental opacity of both human and artificial cognition. Ultimately, it argues for the cultivation of “technological emotional intelligence” – a reflective, intentional approach to the metaphors we use, so that we may make sense of this bizarre world we seem to be entering.

I should add that I really appreciated the way the talk and the discussion after it were structured. Too many conferences and keynotes give short shrift to the discussion, making the process quite one-sided. This was different and engaging, with previously designated individuals (three of whom were students) giving a short response and raising questions for discussion after I had spoken. We also took a few questions from the audience – not as many as we would have liked, given the time constraints. This made the entire process quite interactive – and kept me on my toes, which I loved.

You can also watch the video of the talk and the discussion that followed, or read the AI-generated transcript. Finally, you can read the talk that I had written out below the video. It’s pretty close to what I ended up saying – but I did deviate a bit from the prepared remarks.

Enjoy



Written notes for “The Mirror and the Machine: Navigating the Metaphors of Generative AI”

Thank you, Eamon and the team, for inviting me to be part of this conference: Education after the algorithm. I would vastly prefer to be with all of you in Dublin, and prior to the pandemic that is what you would have done – invited me there in person. And now here I am in my home office in my pajamas – talking to all of you – virtually. If this is progress, I want none of it.

I have been asked to talk about metaphors – and I am going to use this as a chance to bring together a bunch of things I have been thinking and writing about. They are spread out all over the place, mostly on my blog (punyamishra.com), so I appreciate the opportunity to pull these ideas into a somewhat coherent thread.

In this I am literally taking Martin Weller at his word in his book Metaphors of Ed Tech: that metaphors offer a ‘mental sandpit’ in which to explore issues from different perspectives… so that is what I will do.

But before I do that, let me start with a story. Actually, two stories.

Story 1. Back in 1944, Heider and Simmel did a set of experiments showing people abstract silent films of triangles and circles moving around on a screen and asking them to describe what they saw. To a person, they ascribed intentionality to the little geometric figures. We anthropomorphize indiscriminately. We are a strange combination of social beings and cognitive misers – understanding others (their intentions, beliefs, desires) is of critical importance, so we devote a lot of our energy to it. Often indiscriminately.

And then the other day I got full self-driving (as a demo) in my car. I caught myself saying things like “she’s being cautious at this intersection” and “she’s not sure about that truck.” Even though I knew perfectly well I was dealing with sensors, algorithms, and decision trees, I couldn’t help but describe the car’s behavior in human terms. As cognitive misers and fundamentally social beings, we automatically reach for anthropomorphic explanations. It’s our evolutionary shortcut.

The analogy I have made is to the behavior of the beaver. Nature’s master engineers, building these amazing dams out of wood they chop up. But here’s the interesting thing – it is not seeing running water that turns their engineering brains on, it is the sound of running water. If you play a tape recorder with the sound of running water, they will get to work on that spot, even though actual water is flowing right next to it. Even when the sound is artificial, disconnected from any actual water flow, the beaver builds anyway – its cognitive machinery, honed by evolution, triggering an automatic response. This isn’t a mistake so much as an evolutionary shortcut.

What the beaver’s brain was not designed for was an ethologist placing a tape recorder and playing the sound of running water. Nope.

It is much like how we humans can’t help but see the Müller-Lyer illusion – those lines that appear different lengths even when we know they’re identical. However much we try, our brains won’t let us see them as they truly are.

And what does this have to do with generative AI?

We live in an age where technology doesn’t just change our lives; it changes how we understand ourselves. And like that beaver, we might be responding to signals in ways that are hardwired into our nature, rather than reflecting reality.

This brings us to the unique challenge of Generative AI. It’s unlike traditional software. It exhibits behaviors that seem unpredictable, creative, and eerily human-like. It can engage in conversations, write poetry, analyze emotions, and even make mistakes in ways that mirror human cognition.

The way these AI systems act – it’s like they’re designed to trip our ‘that’s a person!’ switches in our brains. We’re kind of stuck with these mental shortcuts whether we like it or not. They trigger our natural tendency to anthropomorphize and jump straight to human comparisons. We can’t help but talk about AI systems ‘thinking’ or ‘learning’ or ‘deciding’ – not because that’s what they’re doing, but because that’s just how our brains work. We are, it seems, trapped in our own cognitive shortcuts.

Some organizations are pushing back against this tide of anthropomorphization. The Privacy Center, for instance, has recently committed to never using terms like ‘AI’ or ‘machine learning,’ instead requiring specific descriptions of technologies and explicitly naming the corporations behind them. They want us to say, ‘tech companies use massive datasets to train algorithms to match images’ rather than ‘AI recognizes faces.’ While I deeply respect this effort to maintain precision and accountability, I wonder if it’s swimming against a powerful cognitive current. Like trying to unsee an optical illusion, we might intellectually know we shouldn’t anthropomorphize these systems, but our brains seem hardwired to do exactly that.

Perhaps instead of fighting this tendency, we need to understand and account for it – which brings us to metaphors, because those are the tools we use to understand new things.

And the way we handle something new is through metaphors. Metaphors are essential – they are the cognitive bridges that allow us to understand new and complex concepts by relating them to familiar experiences. They are, in many ways, the only way we can use existing language to explain things that are new.

But every metaphor that offers insight also hides and tricks us. While these bridges can be generative and help us think in new ways, they can also constrain our understanding and lead us down blind alleys.

So a few of us (Nicole Oster, Lindsey McCaleb and I) started thinking about metaphors and AI – and we came up with a spectrum of metaphors we use to understand GenAI – a continuum from the purely mechanical to the practically mythical, each revealing something about both the technology and us. I won’t go through this in detail but here are the broad strokes…

At the most basic level, we have purely mechanical metaphors – calculators, Swiss Army knives, fancy autocomplete. These frame AI as just another tool, no different really from a sophisticated hammer. Simple and straightforward.

Moving up the scale, we start seeing more complex metaphors: libraries, databases, cultural technologies. These acknowledge that we’re dealing with something that has broader societal implications.

Then things get interesting.

In the middle of our spectrum, we find biological metaphors – neural networks, digital sponges, Venus flytraps. These start to capture something about AI’s ability to absorb and process information, but they still miss the mark on the system’s ability to generate and transform.

One of my favorite metaphors – and I’ll admit to creating this one – is the “smart, drunk, biased, supremely confident intern.” This captures something crucial about AI’s combination of capability and unreliability. Like that intern, it can produce brilliant work one moment and complete nonsense the next, all while maintaining absolute confidence in both.

At the far end of the spectrum, we venture into science fiction territory. Here we find AI portrayed as godlike entities, digital overlords, or mythical beings like golems. These metaphors tap into our deepest hopes and fears about technology, often leading to either excessive fear or uncritical embrace.

Each of these metaphors shapes how we interact with the technology. If we see AI as just a fancy calculator, we’ll miss its creative potential. If we see it as an omniscient oracle, we’ll fail to apply appropriate skepticism. If we see it as that drunk intern… well, at least we’ll check its work.

Here’s where things get interesting. Throughout history, we’ve used the latest technology as metaphors for understanding our own minds – from cuneiform tablets (imprinting knowledge) to clockwork mechanisms to digital computers (the mind as an information processor), and so on. But now, for the first time, we’re in a bizarre reversal. We’re using brain-like metaphors to understand this new technology.

We’re using one black box to understand another.

That’s not just ironic. It’s unprecedented.

So those are our metaphors – from Swiss Army knives to digital gods. But here’s the thing – metaphors aren’t just tools we use; they’re mental shortcuts that use us right back. And nobody understands this better than Martin Weller. In his book ‘Metaphors of Ed Tech’ he helps us see the hidden traps – what he calls critical hazards – in how these metaphors shape our thinking.

The spectrum of metaphors we just explored shows us what we think about AI. The hazards I’m about to discuss show us what we’re missing. Again, I am keeping it short for purposes of time… but you will get the idea.

Let’s consider some of the most crucial ones.

First, there’s what I call “unacknowledged metaphors” – those brain and cognition-based comparisons we use not because they’re accurate, but because they’re the only language we have. When we say an AI system is “thinking” or “learning,” we’re using these metaphors unconsciously, often without examining their implications.

Then there’s the fascinating problem of what Martin calls the “Ed Tech Rapture” – this quasi-religious belief that AI will somehow save education from all its problems. This metaphorical framework turns technology adoption from a practical decision into an act of faith. Not great for critical thinking.

Here’s a weird one that I think Martin calls ‘inverse scrutiny’ – we make teachers jump through hoops to justify using any AI tool, but meanwhile, schools (and governments) are throwing millions at AI systems just because they sound cool – based on hype and promise alone.

The metaphors we use – “AI as savior” versus “AI as threat” – directly shape these contradictory responses.

This isn’t just academic navel-gazing. The way we talk about AI ends up shaping real decisions – how schools spend their money, how teachers teach, what students learn. Our metaphors become our reality. They shape not just how we think about AI, but how we implement it.

So what do we do with all this? If we’re cognitively hardwired to see patterns that aren’t there, to anthropomorphize algorithms, to fall for illusions even when we know they’re illusions – how do we navigate this new world?

Everyone’s talking about ‘AI literacy’ these days – you know, learning about how the tech works, how to write prompts, why AI makes stuff up, where it gets its information. We use words like “prompt engineering” and “source verification.” That’s all good stuff, but it misses something fundamental: us. Our limitations. Our biases. Our cognitive shortcuts.

Remember that Müller-Lyer illusion I mentioned earlier? Even knowing it’s an illusion, even measuring the lines with a ruler, I still can’t help but see one line as longer than the other. But here’s the key – knowing about the illusion means I can act despite what my brain is telling me. I can recognize when I’m being tricked and adjust my behavior accordingly.

The same principle applies to our interactions with AI. We might not be able to stop ourselves from anthropomorphizing these systems – our brains are literally wired for it. But we can develop what I call “technological emotional intelligence” – the ability to recognize and work with our cognitive biases when we engage with AI-like technologies, rather than pretending they don’t exist.

This is crucial because these AI systems – which we will inevitably anthropomorphize – will become the ultimate persuaders. They’ll be used in marketing, in politics, in education, in everyday interactions. They’ll learn to play our cognitive biases like a violin – trust me – and traditional media literacy approaches won’t be enough to counter this.

Let me share a fascinating experiment I did recently. I showed an AI system some deliberately manipulated optical illusions – specifically, I took the Ebbinghaus illusion and made one central dot significantly larger than the other. The AI insisted, with absolute confidence, that the dots were the same size – the textbook answer to the standard illusion, not a description of the image actually in front of it. It was immune to a perceptual illusion that every human brain falls for, yet fooled in a way no human would be.

This tells us something profound. These systems don’t share our cognitive biases – they have their own. Here’s the thing – we need to figure out where our human brain quirks and AI’s weird bits overlap and where they don’t. That’s what real tech street smarts looks like. That is true technological emotional intelligence.

In the end, the challenge isn’t just understanding AI. It’s understanding ourselves. Like that beaver building its dam, we need to recognize when we’re responding to signals rather than reality. We need to know our metaphors, as Martin Weller argues, but we also need to know our minds.

Because in this wild new world we’re stepping into, knowing ourselves – really knowing ourselves – might be our best shot at staying grounded. Our best defense.
