In my previous post (From Shortcuts to Simulations: Two Contrasting Uses of AI in Higher Education), I shared how my colleague Jim ingeniously used AI to prepare for a difficult conversation about academic integrity. Today, I want to explore another fascinating application of AI in education: simulating conversations to address alternative scientific conceptions.
Decades of research have revealed the persistent alternative conceptions students hold about scientific phenomena. These aren’t mere knowledge gaps; they’re robust, intuitive understandings that often resist change and can significantly hinder learning. From teleological views of evolution to geocentric models of the universe, these deeply held beliefs present a unique challenge for educators.
For teachers, addressing these alternative conceptions is no small feat. The “curse of knowledge” often blinds us to the perspective of a novice learner. Even armed with research on common misconceptions, it’s difficult to truly grasp how students perceive the world differently, let alone devise effective strategies to address these views in a classroom setting.
Enter generative AI. By role-playing students with specific misconceptions, AI offers educators a unique opportunity to engage in simulated classroom conversations. These AI-powered interactions provide a safe space to practice, fail, and reflect on our approaches to addressing alternative conceptions.
Intriguingly, the generative nature of AI—its capacity to “hallucinate” or generate novel responses—becomes a strength in this context. These simulations allow us to experience the persistence and variability of alternative conceptions, mirroring the challenges we face in real classrooms.
To explore this idea, I embarked on a series of simulations with ChatGPT. Playing the role of a science teacher, I engaged with AI “students” holding various alternative conceptions. We delved into classic examples: the notion of evolution as a goal-driven process, the misconception that seasons result from Earth’s varying distance from the sun, and the age-old belief in a geocentric universe.
For instance, here is a sample prompt:
I want you to be a beginning science learner who has some key misconceptions about science. For instance, you think that Darwinian evolution is a teleological process. I will play a science teacher who is trying to understand your misconceptions by asking you questions. Answer truthfully based on your misconceptions. Let’s do this 4 times and at the end tell how well I did. Is that clear?
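For readers who would rather script this kind of role-play than type it into the chat window each time, here is a minimal sketch of the same idea using the OpenAI Python SDK. The model name, the wording of the system prompt, and the out-of-character feedback request are my own assumptions for illustration, not part of the original exercise.

```python
# A minimal sketch of scripting the role-play with the OpenAI Python SDK.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in the
# environment, and "gpt-4o" is available; swap in whichever model you use.
from openai import OpenAI

client = OpenAI()

# The system prompt mirrors the sample prompt above: the model plays a
# beginning science learner with a teleological view of evolution.
STUDENT_ROLE = (
    "You are a beginning science learner with a key misconception: you think "
    "Darwinian evolution is a teleological, goal-driven process. The user is a "
    "science teacher probing your thinking. Answer truthfully from your "
    "misconception; do not correct yourself unless genuinely persuaded."
)

messages = [{"role": "system", "content": STUDENT_ROLE}]

for turn in range(4):  # four rounds of teacher questions, as in the prompt
    question = input("Teacher: ")
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = reply.choices[0].message.content
    print("Student:", answer)
    messages.append({"role": "assistant", "content": answer})

# Finally, ask the "student" to step out of character and evaluate the teacher.
messages.append({
    "role": "user",
    "content": "Out of character: how well did the teacher probe and address your misconception?",
})
feedback = client.chat.completions.create(model="gpt-4o", messages=messages)
print("Feedback:", feedback.choices[0].message.content)
```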
The results of this exercise (and others) were both illuminating and humbling. In one simulation, I engaged with “Alex,” a 7th grader who believed that fish “decided” to evolve and live on land. Alex’s responses were reminiscent of what research has shown about how students respond in such situations:
“I think when the fish wanted to explore land, somehow evolution knew that and started to slowly change them so they could live there,” Alex confidently asserted.
I probed further, asking how evolution could “know” something. Alex’s response underscored the teleological thinking that’s so common among students:
“It’s like evolution has a purpose to help animals become better and survive in new or changing places by giving them the tools or body parts they need.”
That said, there were moments where I felt I had succeeded in breaking through. In one particularly revealing exchange, I asked the AI “student” how a fish would know what land looked like or that it would be a good place to live. This question seemed to momentarily stump the simulated learner, causing a noticeable pause in their typically confident responses. It was a moment that mirrored real-life interactions where students encounter the limits of their current understanding. The AI, much like a real student, struggled to reconcile its belief in goal-directed evolution with the logical implications of a fish’s limited perspective. Such moments of cognitive dissonance, whether in a real classroom or an AI simulation, often serve as critical junctures for learning and conceptual change.
These simulations drove home another crucial point: simply presenting correct information often isn’t enough to shift deeply held alternative conceptions. In fact, I had to resist falling into the trap of trying to directly correct a student’s belief rather than exploring the reasoning behind it.
There were also moments of frustration when it just seemed impossible to get through. For instance, in another simulation, I faced off against “Lucy,” a confident 7th grader who was convinced that the sun revolves around the Earth. Despite my best efforts to explain the Earth’s rotation, Lucy stood firm:
“I’m sticking to what I see, and I see the sun moving, not the Earth!”
This exchange underscored the challenge of addressing deeply held alternative conceptions, even when presented with contradictory information. It also showcased the realistic stubbornness that the AI simulation was able to portray, mirroring real-world interactions with students who hold strong beliefs about scientific phenomena.
My interaction with “Lucy” led to an additional humbling realization. As I scrambled to find real-world examples to support my argument (searching on Google in a different browser window), I was struck by how few accessible, grade-appropriate demonstrations actually exist to “prove” that the Earth moves. The one example that kept coming up (Foucault’s pendulum), while compelling, was far beyond a 7th grader’s comprehension.
This brought home a sobering truth: much of the science we teach in schools relies heavily on “take my word for it” rather than direct, observable evidence. Here I was, hundreds of years after the Copernican revolution, struggling to convincingly explain one of its fundamental tenets to a curious student. It was a reminder of the complex nature of science education. We often take for granted that certain scientific “facts” are self-evident, when in reality, the path from observation to scientific understanding is far more nuanced and challenging than we typically acknowledge in our classrooms.
That said, it was also clear to me that this method of using AI to simulate difficult conversations about scientific concepts offers several advantages. It provides a low-stakes environment for teachers to practice various approaches without the risk of negatively impacting real students. The AI’s ability to maintain consistent “student” responses allows for repeated attempts, enabling educators to refine their strategies over time. These AI “students” can also take on different personalities—garrulous, monosyllabic, bored, argumentative, and more—giving teachers opportunities to engage with the variety of conversational patterns they’ll encounter in real classrooms. The simulations likewise help teachers anticipate and prepare for common student arguments and beliefs they might encounter in actual classroom discussions. Perhaps most valuably, this approach offers immediate feedback on the effectiveness of different teaching strategies, allowing for rapid iteration and improvement in addressing alternative conceptions.
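To illustrate the point about different personalities, one way to vary the simulated student is to parameterize the role-play prompt. The persona descriptions and helper function below are hypothetical examples of how that might look, not prompts I used verbatim:

```python
# Hypothetical persona templates for varying the simulated student's style.
# The misconception stays fixed; only the conversational personality changes.
PERSONAS = {
    "garrulous": "You give long, rambling answers full of tangents.",
    "monosyllabic": "You answer in as few words as possible.",
    "bored": "You sound disengaged and need to be drawn out.",
    "argumentative": "You push back confidently on anything the teacher says.",
}

def student_prompt(misconception: str, persona: str) -> str:
    """Build a role-play system prompt for a 7th-grade science learner."""
    return (
        "You are a 7th-grade science learner. "
        f"You hold this misconception and reason from it: {misconception}. "
        f"Personality: {PERSONAS[persona]} "
        "Only change your mind if the teacher's reasoning genuinely persuades you."
    )

# Example: an argumentative student who misattributes the seasons to distance.
print(student_prompt(
    "the seasons are caused by Earth's changing distance from the sun",
    "argumentative",
))
```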
Of course, AI simulations can’t fully replicate the nuances of real human interactions. But they do offer a valuable tool for educators to refine their approach to addressing alternative conceptions.
As I reflect on these simulations, I’m reminded of the following quote:
“There are naive questions, tedious questions, ill-phrased questions, questions put after inadequate self-criticism. But every question is a cry to understand the world. There is no such thing as a dumb question.” ~ Carl Sagan
Sagan’s wisdom resonates deeply with our understanding of alternative conceptions in science education. These are not errors or misunderstandings to be corrected, but worldviews that students have developed based on their interactions with the world. In many ways, these students are being scientific in their own right, developing explanations for the phenomena they observe.
The process of science education, then, mirrors the self-correcting nature of science itself. Who’s to say that our current “correct” formulations will stand the test of time? What we truly have is an ongoing dialogue—a conversation with nature and our evolving understanding.
Science is not a static body of knowledge, but a dynamic process of questioning, observing, and refining our models of the world. In this light, AI simulations offer us a powerful tool to engage with this process. They allow us to simulate not just student-teacher interactions, but the very essence of scientific inquiry: the constant interplay between observation, explanation, and revision.