This semester I am leading a group of students through their Education Doctorate program, and right now they are deep in the work of crafting their dissertation proposals. A proposal is essentially a coherent argument built across three chapters: identify a problem of practice, develop a theoretical framework that explains it, design an intervention to address it, and build a methodology with research questions, data sources, and analysis methods to evaluate whether it worked. Each chapter has to stand on its own, and all three have to cohere as a single argument.
For most students, getting this right is hard, and not for want of effort or ability. A dissertation is a strange genre, and one you (most probably) work in just once in your lifetime. Furthermore, EdD students are typically working professionals juggling full-time jobs, families, and everything else life throws at them while pursuing a doctorate. It is no surprise that in their drafts, theory often floats free of the observed problem. Interventions don’t follow from the framework. Data sources don’t map onto research questions. The pieces are there, but the connections between them aren’t yet visible, often even to the student who wrote them.
This is something I spend a lot of time on as an advisor: helping students align the pieces. It requires, among other things, putting yourself in the student’s context and helping them see the through-lines. And this can be challenging.
Late one night, just a few days ago, I was sitting with one of my students’ emerging proposals, sensing the misalignment but struggling to articulate it in a way that would actually be useful to them. And that’s when it hit me: this is exactly what AI is well suited for. Not writing the dissertation (the thinking is the student’s and has to stay that way) but reading what the student has already written and making the gaps visible. Reflecting the argument back with enough structure to show where it isn’t yet holding together.
After working through the idea with Claude, I developed two prompts that I’ve now shared with my students. The first analyzes alignment between the problem of practice, theoretical framework, and intervention design. The second maps research questions against data sources. Both produce tables and gap analyses that give students something concrete to work with and to bring into their advisor conversations. The prompts are below. Use them, adapt them, share them.
[PROMPT 1]: Theory to Intervention Alignment
What this prompt does: This prompt analyzes whether the dissertation has a coherent logic chain from the observed problem to the theory that explains it, to the intervention designed to address it. It produces a table that makes visible where connections are strong and where they may need development. Link to Google Doc
[PROMPT 2]: Research Questions to Data Alignment
What this prompt does: This prompt analyzes whether your research questions are answerable with the data you plan to collect, and whether every data source is doing meaningful work. It produces a table showing which data sources address which research questions, helping you see coverage gaps or instruments that may not be earning their place. Link to Google Doc
A closing thought
You’ve probably heard of the AI alignment problem, most vividly illustrated by the paperclip maximizer thought experiment. An AI tasked with making paperclips, if powerful enough, eventually converts all available matter into paperclips, including us. The horror isn’t the paperclips. It’s that the system optimized perfectly for an ill-defined goal. That gap between what was specified and what was actually intended is what researchers mean by alignment failure.
Education has its own version of this. For decades we treated the well-structured essay as a reliable proxy for thinking and understanding. We optimized for the output, graded the output, built entire pedagogies around producing the output. Then generative AI arrived and drove a wedge through that assumption. You can now have the output without the thinking, and suddenly the proxy is broken. This is disorienting, and rightly so. It exposes a conflation we never fully examined.
What struck me while writing this post is that the dissertation coherence problem, where theory, problem, intervention, and data all need to align, is yet another version of this: a very human alignment failure that long predates AI. And it turns out the same capability that broke education’s first alignment problem is pretty useful for addressing this one. Same tool, different direction, different purpose.