It started with a rejection. That’s nothing new – we academics collect rejections like kids collect Pokémon cards (or whatever it is that they collect these days). But rejection, if it must come, should be for the right reasons.
This particular rejection hit differently, not because of the rejection per se (I have been an academic too long for that to bother me), but because of the reasons given for it and what they revealed about our field.
Here’s what happened: Nicole Oster and I submitted a theoretical piece to be presented at the annual conference of the American Educational Research Association (AERA). Our paper focused on the psychological reality of generative AI and what it means for education. We argued that AI isn’t just another technology – it’s fundamentally different because it activates our theory of mind in ways no previous technology ever has. We see it as psychologically “real” in a way that transforms the educational landscape. As any reader of this blog knows, I think this is an extremely important issue and one that has not received the attention it deserves. Hence the submission.
So what did the reviews say? The reviews were fascinating – not for what they criticized, but for what they assumed. One reviewer put it bluntly: “The paper has more the style of an opinion than of a research paper.” Even those who praised our work couldn’t quite cross the empirical divide. “Your study is well-grounded in a robust theoretical framework,” wrote one reviewer, “offering significant insights into the potential impact of genAI on educational landscapes.” But then came the inevitable “however” – “consider incorporating empirical evidence, such as case studies or experimental findings, to support your theoretical claims.” Another noted our “theoretical rigor is commendable” but worried about the “lack of empirical data.” The pattern was clear and consistent: good ideas, important insights, but where’s the data?
Think about that for a moment.
How exactly does one gather empirical evidence about the psychological reality of a technology that is fundamentally reshaping our relationship with knowledge itself? What dataset would capture the profound ways AI is transforming the social-emotional landscape of youth, within and outside of our classrooms?
This reminds me of a piece my colleague Michael Barbour recently shared on LinkedIn: Alan Wolfe’s 2016 essay “The Vanishing Big Thinker,” from The Chronicle of Higher Education. Wolfe argued that the triumph of the graduate-school model has led to an overemphasis on specialized research and technical methodologies at the expense of broader humanistic inquiry. Our fields, he argued, have become increasingly specialized and self-referential, with scholars writing primarily for other specialists rather than engaging with broader ideas.
Sound familiar?
My friend John Curry recently wrote an editorial in TechTrends (“Where Have All the Cowboys Gone?”) that hits this same nerve. He reminisces about a time when our field had researchers willing to challenge themselves and each other through academic discourse. When the field was EXCITING. When we grappled with BIG IDEAS.
The irony isn’t lost on me. At precisely the moment when we need big thinking most – when AI could be fundamentally reshaping education, learning, and human relationships – we’ve become more focused on methodology than meaning.
Methodology over meaning!
Don’t get me wrong. I’m not dismissing empirical research. Data matters. Methodology matters. Analysis matters. But they can’t be all that matters. If McLuhan were submitting “Understanding Media” today, would we reject it for lack of empirical evidence? Would we tell Postman that “Amusing Ourselves to Death” needs more data points? We’re so busy measuring the trees that we’ve lost sight of the forest.
Because here’s the thing: techniques, methods, data, analysis – they’re all essential tools. But tools for what? They’re meant to help us understand this complex reality of learning, education, media, culture, and technology we’re swimming in. When they become obstacles to understanding rather than pathways to it, something’s gone wrong.
I want to be clear: my AERA rejection isn’t the problem. Maybe the paper did not deserve to be accepted for the conference. Maybe the ideas were wrong, the arguments weak, or some combination thereof. That I can live with. But being rejected on criteria that are irrelevant to the issues being discussed seems wrong. I mean, here we are, in the middle of a technological revolution that is potentially transforming human cognition or society itself, and we’re arguing about sample sizes. (That is an exaggeration, but you know what I mean.)
It is also important to ask ourselves what kind of message this sends to future scholars like Nicole. Does it simply reinforce certain ways of being an academic and deemphasize others, perpetuating exactly what John decried in his editorial?
So maybe it’s time to remind ourselves why we got into this field in the first place. Not just to gather data, but to understand. Not just to measure, but to illuminate. Not just to analyze, but to envision and to design.
And if we can’t think big thoughts in academia, where can we?
Note about the title image: The title image for this post was created using a combination of ChatGPT and Adobe Photoshop. Building on something Nicole Oster had shared with us, I found an image online (you can see the original here) that I liked but didn’t want to use as is without permission.
I then asked ChatGPT to describe the image in great detail and used that description as a prompt to create a new image. I then brought that image into Photoshop to make some tweaks, such as widening its aspect ratio to 16:9 and adding academic robes and a mortarboard to the silhouetted figure. The final composition of the text and image was done in Keynote.