As we continue to grapple with the hype and transformative potential of generative AI in education, I find myself revisiting a point I’ve made before: the most significant impacts of this technology may not be within the classroom walls, but in the world that surrounds them. And that in turn will transform the classroom in ways that we may not be prepared for.
In a previous post, I reflected on the lessons we should have learned from the social media revolution. And, speaking personally, I do not want to make the same mistakes again. I argued that we need to look beyond the immediate applications of new technologies and consider their broader societal implications. As I wrote then:
Today when we stand at the cusp of a new technology, I try NOT to let that mistake happen again. Hence, when it comes to generative AI, I am both excited by it but also wary of it.
In the recent past, I have focused on the “agentic” nature of these technologies and the possible consequences of that: that, whether we like it or not, we will anthropomorphize them, and that we will develop synthetic relationships with them. As I wrote in a recent post (Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI):
The consequences of this instinctive anthropomorphization are far-reaching and potentially profound. As we interact with increasingly sophisticated AI, we open ourselves up to unprecedented levels of emotional manipulation. The socio-emotional development of younger generations, growing up with these technologies, may veer into uncharted territory. Our ethical frameworks, designed for human-to-human interaction, become muddied as we grapple with treating non-sentient entities as moral agents. I have written about the advent of AI social influencers and how these agentic technologies will increasingly blur the distinction between real and artificial relationships. While these synthetic relationships and their consequences are concerning, they represent just one facet of AI’s impact on our educational landscape.
Recently, I encountered an even more alarming development that underscores the urgency of addressing AI’s broader implications.
In a recent piece in The Atlantic (titled: High School Is Becoming a Cesspool of Sexually Explicit Deepfakes), author Matteo Wong describes a disturbing trend: the use of generative AI to create nonconsensual, sexually explicit images and videos of children. This is not an isolated phenomenon but a widespread issue affecting millions of students across the nation.
For instance, the Center for Democracy and Technology’s latest report reveals that, in the past school year, 15% of high schoolers reported hearing about AI-generated “deepfakes” depicting someone from their school in sexually explicit situations.
15%. Let that number sink in. And this for a technology that has been widely available for just under two years!
The article goes on to say:
Thorn, a nonprofit that monitors and combats the spread of child-sexual-abuse material (CSAM), released a report finding that 11 percent of American children ages 9 to 17 know of a peer who has used AI to generate nude images of other kids. A United Nations institute for international crime recently co-authored a report noting the use of AI-generated CSAM to groom minors and finding that, in a recent global survey of law enforcement, more than 50 percent had encountered AI-generated CSAM.
Even as we worry about plagiarism, another kind of “theft” is being perpetrated: the theft of our students’ identity and dignity, something far more damaging.
As Elizabeth Laird, co-author of the CDT report, aptly puts it, generative AI tools have “increased the surface area for students to become victims and for students to become perpetrators.” This is a fundamental shift in the landscape of risks faced by our students.
In essence, we’re witnessing AI script a different curriculum – one that unfolds in the digital shadows cast by our technological advancements. This unsanctioned syllabus teaches lessons we never intended, in classrooms we never designed.
We’re not just dealing with a new form of bullying or harassment; we’re confronting a technology that can weaponize identity itself. The ease with which these images can be created and disseminated takes an issue we have always faced and ramps it up to 11. Clearly, the rapid pace and commercialization of AI tools, driven by market forces, are outpacing our ethical and legal frameworks.
And of course, the detritus left behind by this technology will fall to us educators to clean up, something I spoke about in my SITE Keynote, and in this rant.
This is why I argue that, as educators and researchers, we can no longer focus solely on integrating AI into curricula or enhancing learning outcomes. Studies showing that chatbots improve engagement with mathematics content (or reduce it) are not enough. We must expand our purview to encompass the broader ecosystem in which education occurs – an ecosystem increasingly shaped by the capabilities and risks of generative AI.
This is also an argument for looking beyond artifacts, processes, and experiences when thinking about designing education: we have to design for systems and culture as well (which is a not-so-subtle plug for the work we have been doing on the 5 Spaces for Design Framework). These challenges cannot be addressed merely by adding another module on AI literacy.
I ended my previous post in a somewhat pessimistic vein, writing “I am both optimistic at an individual level yet deeply pessimistic about our ability (as a species) to deal with this new technology.”
There is nothing in this news that makes me change my mind.