Summary: In which I explore three sticky metaphors—digital plastic, curriculum-shaped objects, and the endlessness of edge cases—and how they illuminate the risks of AI-powered education tools that look like learning but fail to teach.
In a previous blog post (The Mirror and the Machine: Navigating the Metaphors of Gen AI), I wrote the following about metaphors and AI:
Metaphors are essential – they are the cognitive bridges that allow us to understand new and complex concepts by relating them to familiar experiences… If we see AI as just a fancy calculator, we’ll miss its creative potential. If we see it as an omniscient oracle, we’ll fail to apply appropriate skepticism… Our metaphors become our reality.
In other words, metaphors don’t just describe—they shape how we think, what we notice, and what we might miss. Sometimes they constrain our vision, but when they’re sharp and true, they can suddenly make visible what was always there but hard to see. Words matter and this post is about three pithy sets of words that have been living in my head, individually and collectively, illuminating some key facets of our current AI moment in education.
The first is the idea of “Digital Plastic:” This came from Leon Furze’s observation that AI output functions like “digital plastic”—technically functional, often useful, but potentially catastrophic for the environment it inhabits.
The second is the idea of “Curriculum-shaped objects:” This one is a modified version of a phrase I read in a 404 Media article titled The Media’s Pivot to AI Is Not Real and Not Going to Work. As Jason Koebler writes, describing the current state (and future) of journalism:
… AI is destroying traffic, ripping off our work, creating slop that destroys discoverability and further undermines trust, and allows random people to create news-shaped objects that social media and search algorithms either can’t or don’t care to distinguish from real news.
The evocative phrase “news-shaped objects” jumped out at me as a description of news stories that possess all the visual markers of journalism while containing none of its substance.
This is digital plastic masquerading as news. Its educational equivalent is what I am calling curriculum-shaped objects.
Finally, we have the idea that “The Edge Cases are Endless:” This surfaced in Mark Bishop’s critique of LLMs (Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It), which ends with a quote from a Waymo report about self-driving cars:
There are times when it seems autonomy is around the corner and the vehicle can go for a day without a human driver intervening … other days reality sets in because the edge cases are endless.
The idea that edge cases are endless captures something critical about this meeting of AI and the real world, multiple simulacra confronting the original.
So here we have it – three sets of words leading to the following thesis: The edge cases of real teaching are endless, which is why AI-generated “curriculum-shaped objects” remain mere digital plastic – superficially functional but ultimately harmful to the learning ecosystem.
These phrases and ideas had been bouncing around in my head until I read Jennie Dougherty’s critique of Google’s sudden deployment of 30 untested AI tools into Google Classroom. Her blog post Default On; Quality Off: Google Classroom’s New AI Tools brought it all together for me. Here was the perfect case study of how these three concepts intersect in education.
To give some context: a couple of weeks ago, Google decided to release 30-odd AI tools, all set to “default on,” with zero advance notice to educators. This was frustrating enough, but Dougherty’s deep dive into their actual pedagogical quality revealed something more troubling. These weren’t just rushed products; they were tools for spitting out what we might call “curriculum-shaped objects.” Like their news-shaped cousins, they possess all the visual markers of good teaching tools while containing none of their pedagogical value. They are, in essence, educational Styrofoam: lightweight, convenient, and potentially disastrous for the learning ecosystem.
My experiments (also here) with fooling LLMs with a fake optical illusion are a great case in point. These models do not get tricked by actual illusions—they get tricked by non-illusions that masquerade as illusions. For instance, they consistently suggest that the two red circles below are identical in size! That is because this image is, if I may, an “illusion-shaped object.” It bears enough surface similarity to the Ebbinghaus illusion to fool the AI.

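For readers who want to poke at this themselves, here is a minimal sketch (in Python with matplotlib) of how one might construct such an “illusion-shaped object.” This is not the actual image from my experiments; the sizes, counts, and positions below are illustrative assumptions. The point is only that the layout copies the Ebbinghaus arrangement of surrounding circles while the two central red circles are deliberately drawn at clearly different sizes.

```python
# Sketch of a "fake Ebbinghaus" image: Ebbinghaus-style layout,
# but the two red target circles are clearly DIFFERENT sizes.
# (Illustrative only; not the original image from the post.)
import numpy as np
import matplotlib.pyplot as plt

def ring(ax, cx, cy, n, ring_r, dot_r):
    """Draw n gray circles of radius dot_r arranged on a ring around (cx, cy)."""
    for angle in np.linspace(0, 2 * np.pi, n, endpoint=False):
        x, y = cx + ring_r * np.cos(angle), cy + ring_r * np.sin(angle)
        ax.add_patch(plt.Circle((x, y), dot_r, color="gray"))

fig, ax = plt.subplots(figsize=(8, 4))

# Left target: small red circle surrounded by large gray circles
ax.add_patch(plt.Circle((2.0, 2.0), 0.35, color="red"))
ring(ax, 2.0, 2.0, n=6, ring_r=1.3, dot_r=0.55)

# Right target: clearly larger red circle surrounded by small gray circles
ax.add_patch(plt.Circle((6.5, 2.0), 0.75, color="red"))
ring(ax, 6.5, 2.0, n=8, ring_r=1.5, dot_r=0.25)

ax.set_xlim(0, 8.5)
ax.set_ylim(0, 4)
ax.set_aspect("equal")
ax.axis("off")
plt.savefig("fake_ebbinghaus.png", dpi=150)
```

Show an image like this to a multimodal model and ask whether the two red circles are the same size. The argument here is that the familiar layout, not the actual pixels, drives the answer.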
So Google’s AI churns out “curriculum-shaped objects” that mimic the surface features of quality educational materials while lacking any pedagogical depth.
Dougherty provides example after example, and I recommend reading her post in full, but consider just one: Google’s “differentiation” tool. Instead of providing sophisticated scaffolds that help all students access the same high-level thinking, the tool simply makes tasks easier. As Dougherty puts it, “It’s the educational equivalent of suggesting that the best way to help someone climb a mountain is to make the mountain shorter.”
This reminds me of something I tell all my students (and myself) about writing: our goal is to make things simple, not simplistic. The first is an attempt at communication; the second is deception.
Moreover, these forms of curriculum-shaped “digital plastic” are not just mediocre; they are often highly stereotypical, along a range of dimensions. They have been trained on what one might call the fossilized remains of decades of “objectives, materials, procedures, assessment” templates, and, no surprise, their outputs reflect this limitation. However much we may speak of transforming education, these tools do more to sustain the existing model than to change it. They industrialize inefficient pedagogical practices at scale.
And here’s where Furze’s digital plastic metaphor becomes particularly apt: this isn’t just one problematic tool. Teachers can now generate endless variations of the same pedagogically shallow content at the push of a button. Need multiple “differentiated” activities? Each output will be slightly different in surface details but identical in its educational limitations.
And this “surface similarity” comes at a cost.
That is because teaching and learning are anything but homogeneous.
Education (like most forms of human activity) is an inexhaustible source of edge cases. Even if these AI outputs were accurate (which they often are not) and of good quality (again, they are not), they would still be challenged by the irreducible complexity of actual teaching. Real education happens in the margins—the moment when a student’s confusion reveals a deeper misconception, the cultural context that makes a historical example land differently, the split-second decision to abandon your carefully crafted lesson because something more important just emerged.
These are the very moments that no training data captured, the human responses that exist only in the spaces between standardized lesson plans. The edge cases are indeed endless, and they’re precisely where learning lives.
And yet here we are, with curriculum-shaped objects being carpet-bombed into millions of classrooms. Just as microplastics have quietly infiltrated our food chain, these digital artifacts are beginning to accumulate in our educational ecosystem. A bland exemplar here, a dumbed-down differentiation there, each one technically meeting some superficial standard while slowly poisoning the well of genuine pedagogy.
Perhaps this is why the metaphors and language matter so much, and why these three sets of words (digital plastic, curriculum-shaped objects, and the fact that edge cases are endless) and the ideas they convey have been sitting with me for a while. They have helped me see clearly, and I hope they help you as well. This is the first step in the long journey toward making things better.