On April 22, 2025, The Washington Post reported on a draft executive order from the Trump administration that outlines a sweeping plan to embed artificial intelligence into K-12 education. The order calls for AI to be integrated into teaching practices, teacher training, federal grant priorities, and even student competitions. The message is clear: to lead the future, schools must adopt AI—urgently and everywhere.
This post is my response.
What strikes me is not how radical this vision is, but how familiar it is.
In The Evolution of Technology, George Basalla challenges the comforting myth that necessity is the mother of invention. Instead, he argues, invention often arises from curiosity, chance, or play—its utility only recognized after the fact. Jared Diamond makes a similar point in Guns, Germs, and Steel, noting that societies frequently stumble upon inventions long before they find a use for them. The steam engine, for instance, existed for amusement before it revolutionized industry. Invention, in short, frequently precedes necessity.
The world is filled with tools that were invented before we knew what to do with them. Only after they existed did we bend society—and logic—to fit them into place.
Nowhere is this more apparent today than in the rise of generative AI.
Nobody was asking for a machine that could autocomplete essays, hold pseudo-conversations, or generate images of cats dressed as Roman senators. Yet here we are. GenAI has exploded into our collective consciousness. And all of a sudden it is THE technology that will save education.
It has now entered our classrooms, syllabi, grading rubrics, and professional development workshops. The pressure to adopt it is intense. Administrators speak in hushed, urgent tones about AI-readiness; vendors promise personalized learning on demand. But all this fanfare hinges on one core trick: GenAI can talk.
It mimics the rhythms and patterns of human conversation so well that we forget—it doesn’t know what it’s saying.
GenAI’s most compelling feature is also its most misleading: it talks like us. It uses language fluently, which gives the impression that it understands. That it reasons. That it knows. But what it really does is pattern-match. It generates plausible responses without any grounding in meaning. It mimics thought without actually thinking.
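To make "pattern-match" concrete, here is a minimal sketch in Python of the idea taken to its simplest extreme: a toy bigram model that generates text purely from word-co-occurrence statistics. The tiny corpus and the generate function are invented for illustration; real LLMs use neural networks over vast token streams, but the core move, predicting a plausible continuation from patterns in prior text, is the same in spirit.

```python
import random
from collections import defaultdict

# A toy bigram model: pure pattern-matching over word sequences.
# It tracks only which word tends to follow which; it has no
# representation of what any word means.

corpus = (
    "students learn best when teachers guide them . "
    "teachers guide students toward understanding . "
    "understanding grows when students ask questions ."
).split()

# Count, for each word, the words that have followed it.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 12) -> str:
    """Emit fluent-looking text by repeatedly sampling a word
    that has actually followed the current word in the corpus."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("students"))
# e.g. "students learn best when teachers guide students toward understanding ."
```

The output can sound grammatical, even sensible, while the program understands nothing. Scale that idea up by billions of parameters and you get fluency that is vastly more convincing, but no less statistical.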
This illusion of understanding is powerful, perhaps too powerful. Because it talks like us, we assume it thinks like us. But language without grounding is smoke, not substance. And yet this mirage has become the foundation for a new edtech gold rush, one that often bypasses questions of readiness, responsibility, and real pedagogical value.
Despite these fundamental limitations, let us not forget that no one was clamoring for large language models to generate lesson plans, grade papers, or simulate classroom conversations. These technologies arrived unbidden. And now, with the force of policy and the glamour of possibility, we are being told they are indispensable.
But indispensable to what, exactly?
The draft executive order treats AI not as a tool to be cautiously evaluated, but as a foregone conclusion. The invention exists—therefore we must build the necessity around it.
This is the danger. When we retrofit educational “needs” around technological capabilities, we risk redefining education itself around what machines can do, rather than what students and teachers actually need. The consequences are real: shallow learning, increased surveillance, devalued teaching, and a profound confusion between fluency and understanding.
We should be wary of mistaking linguistic mimicry for insight. The technology remains brittle: it fabricates facts, lacks conceptual understanding, and reinforces existing biases. We know this. And yet, the pressure to use it grows not because it solves well-defined educational problems, but because it seems like magic. But linguistic polish is not pedagogical readiness.
Sometimes invention is the mother of necessity. Other times, necessity is an illusion—crafted after the fact to justify a tool that dazzles more than it delivers.
We don’t need to reject GenAI wholesale. But we do need to ask: what is being solved here, really? And who decided it was a problem in the first place?
Asking those questions, rather than assuming the answers, has changed the way I think about this subject.