I recently participated in a virtual panel organized by the Federation of Post-Secondary Educators of British Columbia (FPSE), examining the intersection of AI, human rights, and education. The event brought together five panelists from different institutions and disciplines to discuss how artificial intelligence is changing educational settings and affecting human rights.
The panel, moderated by Dr. Aigerim Shilibekova, included presentations from Peter Lewis on trustworthy AI and AI language use, Lynn Long on developing assessment tools for educational AI applications, Frank Fernandez on surveillance and privacy issues in higher education, and yours truly. The conversation ranged from the environmental costs of AI systems to academic integrity challenges, highlighting that AI in education requires responses from multiple disciplines beyond technology alone.
My presentation focused on what I termed the “illusions” of AI in education, examining how these technologies can mislead us in subtle ways. The illusion of understanding stems from AI’s ability to pattern-match and respond in natural language, creating a false sense of comprehension; I demonstrated this with examples where AI correctly identified optical illusions by name but couldn’t analyze what it actually “saw.” The illusion of neutrality obscures how large language models, trained on human data, inevitably embed the cultural biases and assumptions of predominantly Western, educated sources. I also discussed the illusion of expertise, explaining how AI systems are essentially “hallucination machines” that produce plausible-sounding but potentially incorrect information with unwavering confidence.

Looking to longer-term implications, I raised concerns about the illusion of connection: AI’s anthropomorphic design can hijack our social instincts, which is particularly problematic in education, where friction and disagreement are often essential for genuine learning. Throughout, I emphasized the need for educators to maintain nuance in these discussions, moving beyond the polarized “cheerleaders versus doomsayers” narrative that often dominates AI discourse, while acknowledging that our fears of AI technology may actually be fears of the broader socio-political systems within which these tools operate.
You can watch an unedited recording of the event at this link (Password: HRISC2025!) or in the video embedded below.