I had the pleasure, this morning, of participating in a panel discussion organized by the Center for American Progress, titled "Leveraging Technology To Equip K-12 Students for Success." Although the title covered a broad view of technology, our focus was specifically on the role of generative AI in education. (No surprise there!)
The panel included Kevin Johnstun from the U.S. Department of Education, Dr. Jacqueline Rodriguez from the National Center for Learning Disabilities, and myself, with Dr. Nicol Turner Lee from the Brookings Institution serving as the moderator. We covered a lot of ground, particularly around the potential of AI to address educational challenges and to create equitable learning opportunities.
A huge shout-out to Weadé James and her team for the invitation and all the work that went into making the event a success. The video of the webinar can be found below, along with my responses to a couple of audience questions that we ran out of time to address live.
- What do you recommend state lawmakers focus on in upcoming legislative sessions to best leverage technologies and support students and teachers?
Lawmakers should prioritize two key areas around AI technology in education.
First, there needs to be robust independent oversight of AI models. This may mean establishing independent research bodies, funded by the government but operating autonomously, to evaluate what AI systems can and cannot do. These organizations would have mandated access to assess the capabilities and limitations of AI models, rather than relying solely on claims from AI companies.
Second, we need comprehensive transparency requirements for educational technology companies in the K-12 and higher education space. Companies should be required to disclose which specific AI models they're using in their products; currently, there's no way to know whether they're using OpenAI, Gemini, or proprietary models, which means there is no way of knowing what those models can or cannot do. There are also many definitions of AI, from machine learning algorithms to generative AI to intelligent tutoring systems and more. These are very different from each other, and the distinctions need to be clearly described. Explicit privacy policies around student data collection, usage, and storage must also be mandated, along with specific guidelines for how student data can be used in model training.
These regulations would promote accountability while preserving innovation, enabling informed decisions about AI integration in education. By focusing on understanding and responsible implementation rather than restricting development, these measures ensure we know what we’re working with without constraining companies’ ability to innovate and improve their products.
- Can you please discuss key pros and cons of GenAI approaches such as Khanmigo in tailoring education for individual students (e.g., as a supplement to standard classroom work)?
The potential of AI tools like Khanmigo in personalizing education requires careful consideration of at least three critical factors.
First, we must distinguish between student-driven personalization and AI-determined lockstep curricula. While AI can support individual learning paths, it should enhance student agency rather than perpetuate rigid educational structures that have historically underserved many learners.
Second, the current lack of transparency around these AI models raises significant concerns. Without independent research access, we cannot adequately assess potential biases or inaccuracies – a particular concern for students who are still developing critical evaluation skills and may accept AI outputs without question.
Finally, we must acknowledge that learning is fundamentally social. While AI tutors may offer certain benefits, we risk overemphasizing one-on-one instruction at the expense of collaborative learning experiences that research shows are crucial for motivation and engagement. The often-cited “2-sigma” effect of individualized tutoring, while compelling, hasn’t fully withstood scrutiny. While AI tools can certainly enhance education, treating them as a panacea risks diverting attention from more fundamental educational challenges that require systemic solutions.
- With screen time rising, how can technology be integrated in ways that promote mental well-being and limit negative impacts on students' attention spans and physical and mental health? Having said that, why are we pushing technology on our children?
The question of technology’s impact on students requires reframing the discussion beyond simple screen time metrics. The key distinction lies in how technology is used. When technology enables learning and creation, it serves a valuable educational purpose. It’s the mindless scrolling and negative behaviors, such as cyberbullying, that we need to guard against. Learning involves both engaging with the world and constructing frameworks to understand it – technology can support both these aspects when used thoughtfully.
However, we urgently need stronger regulations governing how these technologies are marketed and distributed to young people. Our current approach of implementing safeguards after problems emerge – as seen with recent Instagram regulations – is insufficient. With the rapid advancement of generative AI, we need proactive policies that anticipate and prevent potential harm rather than just responding to crises. We must establish guardrails before these technologies become even more deeply embedded in educational contexts.