A few weeks ago, I wrote a tongue-in-cheek blog post about the need for “Pencil Literacy,” defined as “A Framework for the ethical, equitable & meaningful integration of transformative graphite technology.” While that post was entertaining to write, it didn’t fully convey a deeper point about AI literacy frameworks that has been nagging at me. Hence, this post.
In particular, I’ve been troubled by how many AI literacy frameworks veer into ethical territory, a trend I believe is both misguided and potentially problematic.
But why does every conversation about AI literacy or new media literacy inevitably veer into ethics? I believe this tendency stems from several factors. First, there's a fear of the unknown: new technologies often inspire anxiety, leading to a desire for ethical constraints before the technology is fully understood. Second, many who speak about AI ethics have a limited understanding of the technology itself, making ethics a "safe" topic that doesn't require deep technical knowledge. Third, there's often a paternalistic attitude at play, with an implicit assumption that people (particularly children) need to be protected from the potential harms of the new technology. Lastly, focusing on ethics can serve as an intellectual shortcut, a way to appear thoughtful and concerned without engaging with the complexities of the technology itself.
This is nothing new. Moral panics have always accompanied the advent of new technologies and media. One striking example is the comic book scare of the 1950s, when comics faced intense scrutiny and censorship due to unfounded fears about their impact on youth. Dr. Fredric Wertham's book "Seduction of the Innocent" (1954) sparked a moral panic by claiming that comic books were a serious cause of juvenile delinquency. This led to congressional hearings and the creation of the Comics Code Authority, a self-censorship body that dictated which kinds of comics were "appropriate." Ironically, many of those comic book readers grew up to become influential figures in Hollywood, creating a multi-billion-dollar industry and shaping the very media landscape that an earlier generation had feared.
And it's not just comics. This pattern of moral panic has repeated itself with other forms of media, whether television, video games, or horror films.
Each of these panics followed a similar trajectory: new media emerges, fear and misunderstanding spread, calls for regulation or censorship arise, and eventually, the panic subsides as the medium becomes normalized.
Trepidation: Original design by Punya Mishra
We do not respond the same way to technologies that have become "transparent" to us. For instance, contrast the examples above with how we talk about "print literacy," a retronym that emerged only after other forms of literacy (digital, media, etc.) came into existence. When we discuss print literacy, we focus on the ability to read, write, and comprehend written text. We don't typically include discussions about the ethics of book content or publishing practices within the definition of literacy itself. We don't include "don't throw books at people" in our literacy curriculum. We understand that the potential for misuse doesn't negate the skill's value.
I am reminded of a famous (sadly apocryphal) story about literary critic Dorothy Parker's review of Ayn Rand's "Atlas Shrugged." Parker is said to have quipped, "This is not a novel to be tossed aside lightly. It should be thrown with great force." Clearly her literacy curriculum was lacking the ethics module.
While humorous, this anecdote underscores an important point: even in traditional literacy, we focus on developing skills that allow individuals to engage critically with content, rather than prescribing how that content should be used or interpreted. We trust readers to form their own opinions about books (even those they might want to “throw with great force”).
Ironically, many of the same voices championing AI literacy with a heavy emphasis on ethics are appalled by book bans in K-12 schools. Hmmm… How can we reconcile the desire to protect free expression in one medium while simultaneously advocating for constraints in another?
I guess what I am trying to say is that we must resist strong-arming the concept of "literacy" into ethical straitjackets.
Of course, I am no Pollyanna when it comes to AI. Far from it. I have written extensively about AI's potential impact on humanity, the environmental costs of large-scale AI systems, the biases baked into AI systems, and more. This knowledge is critically important; it is just not part of literacy. Understanding the printing press's historical impact enriches traditional literacy, but that knowledge is independent of the idea of literacy itself. All of the issues listed above should be part of a comprehensive education, yet they are distinct from what we are calling AI literacy skills. This separation allows for a more nuanced and comprehensive approach to both AI ethics and AI literacy, without one overshadowing or constraining the other.
A New Perspective on AI Literacy
As we consider the future of AI literacy, it’s worth examining a particularly insightful definition of literacy proposed by Myers in 1995.* Myers suggested that literacy is “the ability to consciously subvert signs,” implying that it goes beyond mere communication to encompass “some state of agency in which one can control, even manipulate, how signs are used.”
This definition offers a fresh and powerful lens through which we can view AI literacy. It moves us away from prescriptive ethics and towards a more nuanced, empowering conception of what it means to be literate in the age of AI. Myers’ definition is compelling for several reasons.
- First, its breadth allows it to encompass the diverse range of “signs” that AI can generate and interpret, from text and images to code and data visualizations. This flexibility is crucial in a rapidly evolving technological landscape.
- Second, it places a strong emphasis on agency. In the context of AI, this suggests that true literacy involves not just understanding or using AI systems, but actively shaping and redirecting their outputs for one’s own purposes. This aligns with our goal of empowering individuals rather than constraining them with ethical mandates.
- Third, the notion of “subverting” signs means that literacy is not just a neutral skill, but can be a tool for challenging dominant narratives. In the context of AI, this perspective encourages us to see AI systems and their outputs not as immutable, but as human constructions that can be questioned, redesigned, and repurposed. This view promotes AI literacy as a means of democratizing technology, enabling individuals to reshape AI tools to meet their own needs.
- Fourth, Myers’ definition inherently values expertise and deep understanding. To subvert a system effectively, one must first understand its rules and conventions intimately. In the realm of AI, this translates to a deep, practical knowledge of how these systems work, their capabilities, and their limitations.
- Finally, this view of literacy places creativity at its core. It suggests that true literacy involves not just comprehension, but the ability to innovate, to bend rules productively, and to create new meanings and applications. In the context of AI, this could mean using language models in unexpected ways, combining different AI tools to create novel applications, or even developing new AI systems that challenge existing paradigms.
This view of literacy pushes us to move beyond discussions of ethics and the acceptance of fear-based restrictions. Instead, we focus on creating users who are competent, creative, aware, and empowered.
*Myers, J. (1995). The value-laden assumptions of our interpretive practices. Reading Research Quarterly, 30(3), 582-587.