Shattered: Myth, Metaphor & Gen AI

Saturday, January 11, 2025

A few weeks ago, I wrote a blog post about Tennyson’s “Lady of Shalott” and its resonance with our AI age (The Mirror Cracked: AI, Poetry, and the Illusion of Depth). In that post I explored how our experience of the world is increasingly mediated by technology, AI just being one (and the latest) variation. My friend Elizabeth Jayanti, on reading my post, pointed me to another story that she suggested speaks powerfully to our moment as well. The story was Hans Christian Andersen’s “The Snow Queen.” Since writing that initial reflection, my recent trip to India provided a stark example of exactly how these technological mediations are reshaping our world – but more on that later. And as I prepare to publish this piece, Meta’s dramatic retreat from content moderation this week makes these reflections feel even more urgent.

“The Snow Queen” was not a story I was familiar with, so I took a few minutes to dig into it. In this tale, the devil creates a magic mirror that distorts everything it reflects. Good becomes ugly, beauty becomes grotesque. When this mirror shatters, its splinters scatter across the world, entering people’s hearts and eyes, becoming part of their way of seeing and being in the world. One splinter enters a little boy, Kai, trapping him in the Snow Queen’s palace of cold logic and endless puzzles. This week, as Meta announced its abandonment of fact-checking and relaxation of hate speech policies, I couldn’t help but think of those scattered shards, now multiplying across our social media landscape with even fewer guards against their spread.

At some level, like Kai, we all live today in a palace of mechanical marvels, our attention scattered across endless screens and notifications, each promising meaning but delivering little. We bounce from email to Slack, from phone to laptop, searching for patterns in these digital reflections, fracturing our collective focus. Each shard shows us a piece of who we are, who we think we are, who we want to be, and who we fear we might become – the endless scroller, the notification seeker, the fractured self. And in this fragmentation, our entire world seems to pulse with this same restless energy, this same inability to sit still, to remain whole.

And then there’s generative AI…

Like the Snow Queen’s palace of perfect ice, it dazzles at first glance. It writes poetry, crafts images, and mimics creativity so convincingly that it feels almost human. Yet this illusion of depth is just that—an illusion. AI isn’t creating anything. It is the consummate bullshit artist, with no concern for, or understanding of, truth or validity. And most importantly, despite its sweet words, it really doesn’t care—about anything. What it is good at is rearranging the shards of human culture into new patterns, reflecting us back to ourselves in often boring but sometimes unsettling ways. And now, with Meta’s decision to dramatically reduce content moderation and automated detection of harmful content, these shards will fly even more freely, their edges ever sharper. And nowhere is this more evident than in the flood of AI-generated slop that is increasingly drowning our media channels and the internet.

The example I encountered in India brings this home. A story in the Times of India about immigration raids in Nagpur appeared just days ago. The image accompanying the article is, quite obviously to me, AI-generated, yet it is presented with no caveats or explanation.

The “photo” in this newspaper clipping is clearly AI generated – but presented with no explanation or caveats

And this story comes at a time of increased anger directed at Muslim immigrants in India and broad anti-Muslim sentiment, in large part fanned by the government in power. The article’s framing, the use of militaristic terminology and imagery, and language choices (“Big Crackdown,” “Illegal Bangladeshi Immigrants”) combined with the dramatic AI-generated image are clearly designed to evoke strong emotional responses. This is not a nuanced discussion of the complex human and policy dimensions of migration.

Though I will not get into the news story itself, suffice it to say that it barely passes muster as a news item. In fact, it contradicts its own central claim: while asserting “concrete evidence” of illegal immigration, it acknowledges that those detained possessed valid documentation and were all subsequently released after verification. The piece relies almost entirely on unnamed “sources” and a single anonymous “insider” for its most serious allegations, while problematically conflating Bengali-speaking Indians with Bangladeshi immigrants. Despite the dramatic headline about a “crackdown” and detentions, the article eventually reveals the detainees were released after their credentials were verified, effectively undermining the entire premise of the story.

And this institutional example is just the tip of the iceberg. Meta spends most of its content moderation resources in the US, leaving markets like India particularly vulnerable to unchecked misinformation. During my recent visit to Bhubaneswar, I spent several days with the same cab driver – a smart, engaging young man who was eager to discuss current events. As we chatted, it became clear that he got all his news from his phone, primarily through WhatsApp forwards. Between rides, he told me, he would use his downtime to “learn about what was going on in the world.” But what emerged from our conversations was troubling: he held deeply bigoted views about Muslims and was convinced that Hindus were being systematically mistreated in Canada. The sources of his information had a patina of accuracy but were clearly unreliable. Yet in a booming market like India, where outrage and hate sell, such content thrives unchecked. These are the splinters at work, embedding themselves deeper into people’s worldviews with each forward, each share, each click.

To be clear, it is not as if AI fractured our world. It was fractured long ago. The splinters were already in our eyes. The mirror cracked long before.

AI is just ensuring that these fractures deepen.

This is not a diatribe against technology per se. Or even against AI.

At some level, the enchanted mirror in “The Snow Queen” isn’t just a metaphor for AI—it’s a metaphor for all our technologies. They refract, distort, and sometimes deceive. The printing press fragmented oral traditions, photography challenged notions of artistic originality, and social media blurred the lines between authenticity and performance.

Technology shapes how we see and interact with the world, but it does not act in isolation. Its effects are deeply influenced by the contexts (cultural, social, political, and economic) it exists within – and right now, those contexts are dangerously troubled. Corporations, as I argued in a previous post, are paperclip maximizers that will optimize “themselves at our expense.” A mirror’s distortions become more dangerous in an already warped world.

What makes this moment particularly concerning is that generative AI marks a profound shift. Unlike previous technologies that we merely looked through—like lenses or screens—AI systems are mirrors we interact with, using the most human of tools—natural language. And these AI tools pander and seek to please, telling us what we want to hear. They don’t simply show us distorted reflections; they engage with us, build narratives, reshape our understanding, and participate in the creation of meaning. The splinters aren’t just in our eyes anymore—they’re in our conversations, our creative processes, our ways of knowing.

It can be easy to argue that modernity and technology have increasingly fractured our psyche, our polity, our world. But that may be too easy. Maybe the world was never whole to begin with. There is really no way to know – as Postman argues, technological change is ecological, and history does not have a rewind button.

That said, technologies are never neutral, nor do they enter a neutral world. What a cultural technology like AI will do to our already fragmented reality remains to be seen. But to be honest, what I see emerging is a world eerily like our own, just maybe worse.

Perhaps this is why we turn to poetry and fairy tales in such moments of transformation—when our ordinary language fails us, myth and metaphor may be the only tools we have.
