Prompts vs. Principles: Contrasting OpenAI’s Study Mode to Real Educational AI

by | Thursday, August 07, 2025

In which I examine OpenAI’s much-hyped “Study Mode” and contrast it with a couple of real research-based approaches. The differences are telling. Read on…

OpenAI recently announced “Study Mode” for ChatGPT with considerable fanfare, claiming it was built “in collaboration with teachers, scientists, and pedagogy experts” and grounded in “longstanding research in learning science.” The marketing copy is impressive: words like “scaffolded responses,” “metacognition,” and “personalized support” pepper the announcement, painting a picture of sophisticated educational technology.

But when you look under the hood, what you find is… a prompt. A reasonable prompt, to be sure, but nothing that an intelligent layperson with some basic knowledge of learning could not have developed. Essentially, it is a set of instructions telling the AI to ask guiding questions instead of giving direct answers. While multiple sources have reported what appears to be the system prompt for Study Mode (though OpenAI hasn’t officially confirmed this), the revealed instructions read more like teaching tips than groundbreaking educational technology.

[Note: I’ve attempted to verify the authenticity of the reported system prompt across multiple sources, and while there’s consistency in what’s being shared, this may not be the complete or exact prompt used. This caveat aside, the fundamental point about the approach remains valid.]

The prompt includes some basic educational concepts that sound reasonable on the surface. It mentions Socratic questioning, tells the AI not to do homework for students, and emphasizes building on existing knowledge. 

But it also has significant flaws that reveal the limitations of treating complex educational challenges as prompt engineering problems. The emphasis on being “warm, patient, and plain-spoken” plays directly into AI’s well-documented tendency toward sycophancy—agreeing with users and avoiding the productive friction that promotes, and may even be essential to, learning.

Moreover, as an article in MIT Technology Review observed, “underneath the hood, it is not a tool trained exclusively on academic textbooks and other approved materials—it’s more like the same old ChatGPT, tuned with a new conversation filter…” And, the article continues, “…because of the way AI works, you can’t expect it to distinguish right information from wrong.”

What we have here is what I’ve elsewhere called a “curriculum-shaped object”—something that mimics the surface features of quality educational interaction while lacking any real depth.

Moreover, like all LLM-based systems, Study Mode can be easily subverted. Users can simply ask it to ignore its instructions, role-play scenarios, or frame homework as something else entirely. Many such examples are already online.

And of course there is the always available option to just ignore it and use ChatGPT instead.

OpenAI’s announcement conveniently omits these fundamental limitations, presenting what is essentially a fragile prompt as robust educational technology.

It is easy to rant at OpenAI (as I tend to do, for various reasons). But rants, though emotionally satisfying, only go so far. As important as it may be to call them out for their laziness, it is equally important to point to better solutions and ideas.

I want to contrast Study Mode with two examples that show how this technology can actually be used in thoughtful, productive ways. These approaches are not perfect, mainly because they are built on an imperfect technology, but they do show how thoughtful intentional design, guided by research and good practices, can help create powerful tools for learners.

Two quick notes: these aren’t the only good examples of educational AI—there are surely many others. And full disclosure: I know all these folks personally, so this is admittedly a convenience sample.

The Educator Discussion Bot: Grounding AI in Educational Theory

My friends Catheryn Reardon and Jim Dunnigan created the Educator Discussion Bot, designed to help educators navigate difficult conversations with students, parents, and colleagues. But here’s the key difference: instead of writing a clever prompt and calling it a day, they built a comprehensive knowledge base grounded in seven distinct theoretical frameworks: ASU’s Principled Innovation Framework; Carl Rogers’ humanistic psychology; Fishbein & Ajzen’s Reasoned Action and Planned Behavior models; Appreciative Inquiry; the ADKAR change management framework; Transformative Learning Theory; and Social Cognitive Theory.

This means they don’t just depend on a prompt; the bot’s interactions rest on a foundation of actual educational and psychological research. The bot is embedded within a website that provides users with essential context, guidance, and research-informed support, and it offers multiple branching practice scenarios that mirror real-world dilemmas, helping build confidence and skill through interactive experience and structured reflection.

KondoBot: RAG-Optimized Knowledge for Executive Function

Steve Salik’s KondoBot takes a different but equally rigorous approach. Designed to support students’ executive functioning skills (time management, task initiation, organization, and planning), KondoBot uses retrieval-augmented generation (RAG) with a carefully curated knowledge base focused on executive function and student success in higher education.

But here’s what sets this apart: Salik didn’t just throw research papers into a database. He developed a systematic process for creating “RAG-optimized abstracts” from primary source documents, each following a structured format with sections for core concepts, applications, usage scenarios, common pitfalls, and practical tips. Each document is tagged with metadata and organized using a modular system that allows the AI to retrieve precisely the right information for each student’s specific challenge.

The knowledge base covers everything from executive function strategies for planning, prioritization, and focus management to academic success practices for overcoming procrastination and structuring study routines; from practical tools and templates for goal setting, scheduling, and professional communication to coaching and mentorship support grounded in research and aligned with educational values.
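To illustrate the kind of structure described above, here is a minimal sketch of a “RAG-optimized abstract” and a naive tag-based retrieval step. The field names, tags, and example content are my own illustrative assumptions; Salik’s actual schema and retrieval pipeline are not public.

```python
from dataclasses import dataclass, field

@dataclass
class RagAbstract:
    """A RAG-optimized abstract distilled from a primary source.

    Field names are illustrative, mirroring the sections described in
    the post: core concepts, applications, usage scenarios, common
    pitfalls, and practical tips, plus metadata tags for retrieval.
    """
    title: str
    core_concepts: str
    applications: str
    usage_scenarios: str
    common_pitfalls: str
    practical_tips: str
    tags: list = field(default_factory=list)

def retrieve(abstracts: list, query_tags: set) -> list:
    """Rank abstracts by tag overlap; drop those with no match."""
    scored = [(len(query_tags & set(a.tags)), a) for a in abstracts]
    return [a for score, a in sorted(scored, key=lambda t: -t[0]) if score > 0]

# A two-entry toy knowledge base (content invented for illustration).
kb = [
    RagAbstract(
        title="Implementation intentions",
        core_concepts="If-then planning links situational cues to actions.",
        applications="Task initiation for students who stall on starting work.",
        usage_scenarios="A student repeatedly delays starting essays.",
        common_pitfalls="Plans too vague to trigger action.",
        practical_tips="Phrase plans as 'When X happens, I will do Y.'",
        tags=["procrastination", "task-initiation"],
    ),
    RagAbstract(
        title="Time blocking",
        core_concepts="Reserve fixed calendar blocks for specific tasks.",
        applications="Weekly planning and focus management.",
        usage_scenarios="Study time evaporates into small distractions.",
        common_pitfalls="Blocks scheduled back-to-back with no buffer.",
        practical_tips="Leave 15-minute buffers between blocks.",
        tags=["scheduling", "planning"],
    ),
]

hits = retrieve(kb, {"procrastination"})
print(hits[0].title)  # → Implementation intentions
```

The point of this structure is that the model retrieves a curated, purpose-written summary matched to the student’s specific challenge, rather than free-associating from its training data.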

In both cases, these bots represent months of careful curation, systematic organization, and iterative refinement—the kind of painstaking work that produces genuinely useful educational tools.


Here’s what frustrates me: OpenAI has the resources to do this kind of work. With their billions in funding and access to educational experts, they could have built comprehensive, research-grounded knowledge bases. They could have partnered with educational institutions to understand real classroom needs. They could have developed tools that genuinely advance the field of educational technology.

But as Benjamin Riley points out in his recent Substack post, Study Mode is just one part of OpenAI’s broader “moonshot” in education—a strategy that includes partnerships with teachers’ unions to create AI training facilities and embedding ChatGPT directly into learning management systems like Canvas. Riley argues this represents OpenAI’s intrusion into education systems, driven more by financial necessity (students don’t pay for subscriptions, but institutions do) than genuine educational purpose.

The implications go far beyond a single prompt. When a company with no educational expertise positions itself as the “world’s largest learning platform” while systematically integrating into the infrastructure of education, we should be asking harder questions about what we’re allowing into our classrooms and why.

Instead, they wrote a prompt, wrapped it in marketing language about learning science, and called it innovation. Jim, Catheryn, and Steve (and countless other educators at ASU and elsewhere) show us what ethical AI development actually looks like: deep engagement with educational theory, systematic knowledge organization, iterative refinement based on real-world use, and genuine commitment to supporting learning rather than just completing tasks. It is more work, but of the good kind.
