I was of three minds,
Like a tree
In which there are three blackbirds.
—Wallace Stevens, “Thirteen Ways of Looking at a Blackbird”
I thought of Stevens’s three minds earlier today when I looked at my calendar for the upcoming week. That’s what I usually do on Sundays, just making sure there are no surprises coming up. And it was when I was scanning the Tuesday and Wednesday calendars that I noticed something interesting—three different, unrelated meetings, within twenty-four hours, that connected in strange ways. Essentially, two dissertation defenses bracketing a public meeting, all within, almost exactly, a single day. And frankly, if they hadn’t been sitting next to each other on the calendar, I would not have realized just how strange their juxtaposition was, and would not have spent the past couple of hours writing this post (instead of preparing for the week, I must add).
Tuesday afternoon is the first dissertation defense, broadly on the topic of learning with AI. In essence the student is presenting a set of studies about how large language models can personalize educational texts for different readers. It draws on reading comprehension theory, uses natural language processing to evaluate whether AI-generated adaptations actually align with cognitive needs, and runs experiments across multiple models and prompting strategies. It is careful, systematic, technically sophisticated work. And not a single human reader was involved. Every experiment happens at the level of the machine and the metric. The knowing here is computational. Truth means the measures move in the direction the theory predicts. Rigor means the instruments are calibrated and the claims do not outrun the data.
Wednesday morning, I am off to a community class, introducing a group of non-specialists to artificial intelligence. Not the theory of it. Not the architecture. The encounter. I will do what I have done many times now, which is describe generative AI as a smart drunk intern: remarkably capable, occasionally brilliant, and fully prepared to make things up with a confidence that would be impressive if it weren’t so dangerous. My job is to help people who may have heard of it, or who have struggled to make sense of the hype (utopian and dystopian), understand and use this tool without being used by it. The knowing here is practical and immediate. There is no evaluation framework. There is just a person typing a prompt and slowly discovering that the answer is plausible and wrong, or implausible and right, or some unsettling combination of both.
We close out the twenty-four hours on Wednesday afternoon with another defense. This one could not be more different from the first. A STEM instructional coach, working in a school facing budget cuts and enrollment declines, designed a collaborative framework for teaching teams and then studied what happened when real teachers tried to use it. Teachers are stretched thin. And in the middle of all that, this coach tried to build something: a structured way for people to actually plan instruction together, to treat uncertainty as information rather than failure, to make collaboration hold under pressure. The knowing here is situated, embodied, pulled from observation notes and interview transcripts and the lived reality of people trying to do good work under real constraint.
Two of these three events involve AI. One does not. That asymmetry, and its possible implications, may be the most interesting thing about the sequence.
The first defense asks how machines process text to serve human learning. The community class asks how humans can develop judgment about machines. Both are about AI, but they face in opposite directions: one looks at the technology from inside the research apparatus, the other looks at it from the perspective of someone encountering it for the first time. And then the third event has nothing to do with AI at all. It is about teachers and budgets and collaboration and whether the structures that hold professional work together can survive the ordinary volatility of school life.
What “rigor” means shifts with each room. In the computational study, it is precision: valid metrics, controlled comparisons, claims tethered to data. In the community class, it is honesty: am I preparing people to be critical users, or just giving them a tour of a shiny tool? In the practitioner defense, it is fidelity: does the researcher stay close to what nine weeks of observation in a single school can actually support? Evidence, too, changes shape entirely. In one room it is computational output. In another it is the moment someone watches an AI confidently but incorrectly explain a fake optical illusion. In the third it is a transcript of teachers arguing about what “success” should mean in a genetics unit. And transfer ranges from algorithmic generalizability across models, to whether a retiree goes home tonight and asks a better prompt, to whether a collaboration framework built under one set of constraints can survive the next budget cycle.
Anyone who works in education long enough ends up living in multiple registers, whether they notice it or not. We theorize, we translate for public audiences, we study practice. Most weeks the shifts blur together, one meeting flowing into the next, and you do not notice you have changed languages three times before lunch. What this particular twenty-four hours does is make the motion visible, the way a time-lapse makes you see the clouds actually moving. The motion was always there. You just could not see it.
And the question underlying all of them: how do we come to know things well enough to act on them? The computational researcher knows through measurement. The community learner knows through encounter. The practitioner knows through sustained attention to what happens when plans meet reality.
And strangely enough, of the three, it is the practitioner’s world, the one with no AI in it, that may be most at risk from AI. Budget-strapped schools, declining enrollment, teachers stretched to breaking… that is exactly the context where someone will say “just use the technology.” Use AI to personalize the texts. Use AI to replace the collaboration we can no longer afford to staff. Use AI to generate the curriculum (though AI may at best generate what I have called “curriculum-shaped objects”). The smart, drunk, overly confident, sycophantic, biased intern gets promoted to department head because the department cannot afford a sober one. And the careful, human, painstaking work of teachers learning to plan together, to sit with uncertainty, to build something that holds… that work becomes harder to justify in a spreadsheet, even as it remains the thing that actually matters.
The intern, of course, does not worry about any of this. It generates, it responds, it moves on. It has no stake in the outcome. Which is precisely why the rest of us have to.
I do not know which to prefer,
The beauty of inflections
Or the beauty of innuendoes,
The blackbird whistling
Or just after.
—Wallace Stevens, “Thirteen Ways of Looking at a Blackbird”