GenAI in Teacher Education: A Technoskeptical Perspective

Sunday, February 25, 2024

Image created using Adobe Firefly & Adobe Photoshop, composed in Keynote by Punya Mishra 

By Marie K. Heath and Punya Mishra

Hello! This is a cross-blog post between Punya Mishra’s blog, where he plays with ideas of learning, technology, design and creativity (and whatever else catches his fancy) and the Civics of Technology blog, a place where folks across the Civics of Technology community examine the intersections of technology, society, and education. So first, a welcome! Especially if you are new to one of these spaces! And second, a short introduction across the blogs:

Hello! Marie, here! I am not a robot, but I refuse to prove this fact to Google’s CAPTCHA. I am also the co-founder of the Civics of Technology project, along with fellow human, Dr. Dan Krutka. I am a former high school social studies teacher who now works as an assistant professor of Learning Design and Technology at Loyola University Maryland. My work across educational spaces labors to advance more just technological and educational futures. I like to say that if you ask generative AI to write my bio, it responds with the Mariah Carey “I don’t know her” meme.

It is always a treat for me to share time, space, and thinking with Punya, who I am happy to have as a colleague and a dear friend. His freewheeling and playful style of thinking and being in the world always provokes me toward new and different ways of thinking. My thanks to Punya for this cross-blog posting.

Hello, this is Punya, and likewise a pleasure to share the space-time continuum with Marie. I am never sure what we will talk about next, but our conversations are full of ideas and, most importantly, laughter. That said, it is somewhat challenging to follow up Marie’s introduction of herself, so I will let AI do the job. Here is the first paragraph of how Bing Chat (in creative mode) described me (complete version here).

Hello, I’m Punya Mishra, and I have a serious identity crisis. I don’t know if I’m an associate dean, a professor, a researcher, an author, or a designer. I work at the Mary Lou Fulton Teachers College at Arizona State University, where I try to do everything though the end results are never entirely clear. I’m interested in education, technology, creativity, and design, but don’t ask me to define these terms—because you know, “it’s complicated.”


Recently, the two of us have been spending time thinking about the ways generative AI may help and hinder education and teacher preparation. Both of us come at this topic through a historical and socio-cultural lens, where we try to better understand the nature of a technology or medium and the impact of the broader socio-economic-technological complex on how the technology plays out in our world. For instance, Punya dug into these ideas in a recent blog post titled Media, Cognition & Society through History and another titled The Postman Always Rings Twice: Unpacking McLuhan. Marie, along with Dan Krutka and Jacob Pleasants, has played with these ideas in their post Provocations on Technoskepticism and in their related research.

It is not surprising, therefore, that we have spent quite a bit of time using these approaches to explore how AI may affect society and how education (and educators) will have to respond. For instance, see a recent piece by Punya on Generative AI, Teacher Knowledge and Educational Research: Bridging Short- and Long-Term Perspectives. Marie has focused on applying these ideas in her writing, asking technoskeptical questions about generative AI, and has collaborated on developing the Kapor Center’s Responsible AI and Tech Justice: A Guide for K-12 Education.

Marie & Punya go to Washington, DC

In September 2023, David Slykhuis invited us to facilitate a two-day strand around this topic at the National Technology Leadership Summit (NTLS) in Washington, DC. The aim of the strand was to work with other teacher educators to develop a set of questions for teacher education to use for inquiring into policy, practice, and research around generative AI. A full list of the thoughtful scholars, policy leaders, and practitioners involved in the project may be found on page 8 of our final report:

Heath, M., & Mishra, P. (2023). Generative AI: Possibilities, promises, perils, practices, and policy. National Technology Leadership Summit meeting, September 14-15, Washington, DC.

In order to explore the obvious and hidden impacts of generative AI, we reviewed diverse literature on the design and impacts of generative AI (Bender et al., 2021) and its implications for education (Berkshire & Schneider, 2023; Heath & Krutka, 2023; Mishra et al., 2023; Trust, 2023; Williamson, 2023), indigeneity (Hendrix, 2023; Marx, 2023), the environment (Bender et al., 2021), and society more broadly (Bender et al., 2021; Williamson, 2023).

Next, the entire group applied technoskeptical (Krutka et al., 2022) and practice-based questions (Mary Lou Fulton Teachers College, 2023) to identify gaps in theory, positionality, and approaches to AI in education. Finally, the strand participants identified five broad themes, with two to three attendant reflective questions for each, to engage with when considering if, when, and how to use generative AI in teacher education.

The five themes we identified are: Truth/Verisimilitude; Equity and Justice; Professional Works, Mindsets, Tasks, and Skills; the Broader Context of Teacher Preparation Programs; and Teaching About Generative AI and Its Impacts on Society. We share our summary of each theme, along with its questions, below. You may also download an easy-to-read, shareable copy of the report on the NTLS website.

As you read through, we are curious: What topics are we missing? Which questions have we overlooked? How has your school thought about generative AI within these contexts or others? Should we ask that smart, drunk intern, ChatGPT, what it thinks of these topics and ideas? Please feel free to comment on the blog, reach out through Marie’s contact page, drop a comment on Punya’s blog, or email him directly. Looking forward to continuing the conversation!

Theme 1: Truth/Verisimilitude

Overview: The advent of generative AI, with its ability to craft realistic-looking synthetic media that can easily be mistaken for reality, poses profound challenges for society at large, and thus becomes relevant for educators. These technologies bring with them the potential for widespread misinformation, as they can manipulate narratives, prioritize certain perspectives over others, and even reshape our collective understanding of what we deem as truth, potentially altering our very perceptions of reality. Added to this is the complex interplay of socio-economic factors, like wealth and power, which can influence the presentation and acceptance of algorithmically produced “truths,” amplifying some narratives as “truth” while further marginalizing others, exacerbating existing schisms and inequities in our world. This evolution in AI technology demands a heightened awareness and critical approach from educator preparation programs as they prepare the next generation of educators.

Reflective Questions

  1. Does our curriculum/program help educators develop a critical understanding of the nature of generative AI and its ability to blur the lines between truth and falsehood, and between subjective and objective truth? 
    1. How does our curriculum address the technological intricacies of Generative AI that allow it to simulate reality?
    2. Are educators introduced to discussions on the philosophical implications of AI-generated truths versus human-derived truths?
  2. Does our curriculum/program adequately prepare teachers to address the potential spread of misinformation, and its impact on democratic society, given the ease of creating realistic synthetic content?
    1. How are teachers trained to identify and debunk AI-generated misinformation in their classrooms?
    2. Is there a component in the curriculum that delves into the broader societal consequences of unchecked synthetic content on democratic processes?

Theme 2: Equity and Justice

Overview: While many technology companies and the powerful individuals who run them (including Musk, Wozniak, and others) have speculated about future harms of generative AI on humans (Future of Life Institute, 2023), algorithmic harms already exist in our present. Unlike the science fiction dystopia presented by the open letter signed by Musk and other tech leaders, artificial intelligence, or algorithmic models, currently cause material harm to people pushed to the margins of society. Black feminist and queer scholars have called attention to the algorithmic injustice embedded within AI models and its damaging impact on marginalized people (Benjamin, 2020; Costanza-Chock, 2020; Noble, 2018; O’Neil, 2017). For instance, algorithms used in healthcare settings to determine medical interventions undercalculate the pain Black women report, resulting in underdiagnosis and increased death compared to their white counterparts (Benjamin, 2019). Despite a hope that AI would diminish or eliminate bias in mortgage lending, AI reproduces housing and lending inequities, prompting lenders to reject a higher percentage of loans and charge more interest to Black and Latinx applicants than their white counterparts (Bartlett et al., 2022). Because of racism encoded in algorithmic learning, Black people have been falsely arrested on the basis of faulty facial recognition (e.g., Robert Williams’s arrest by Detroit police, Kentayya, 2020; and Porcha Woodruff’s false arrest by Detroit police, Cho, 2023). Trans people boarding planes are forced to walk through body scanning systems that do not recognize their bodies, resulting in increased and invasive body searches (Costanza-Chock, 2020). These are not potential harms of AI; they are existing harms that have been occurring for years, despite the attention called to them by activists and scholars. Similar biases have been seen in the use of ChatGPT in educational contexts as well (Warr, Oster, & Isaac, 2023). 

The rapid technological changes of generative AI, coupled with a hasty implementation in education, may result in direct harms to already marginalized and minoritized students. How can we work toward just uses of generative AI in education?

Reflective Questions:

  1. How does generative AI currently and potentially intersect with systems of power in education?
    1. How are we preparing teachers to critically examine marginalized and minoritized people’s lived experiences with generative AI?
    2. How are we preparing teachers to identify systems of oppression which may be amplified by using generative AI?
  2. How does our teacher preparation program (TPP) consider what facilitates and prevents access to generative AI in educational spaces?
    1. Are there tiered systems for access (free and paid)?
    2. What other technology and resources are needed to access generative AI models?
  3. How do we evaluate whether generative AI is ethically designed for education use?
    1. How is data collected and stored?
    2. What is the aim of generative AI?
  4. Does our curriculum/program emphasize the development of critical thinking skills to interrogate whose perspectives and narratives are being prioritized or marginalized by AI-generated content? 
    1. How does our curriculum guide educators in recognizing and understanding potential biases embedded within AI tools and outputs?
    2. Are there discussions and exercises aimed at understanding the power dynamics at play when algorithms decide which narratives to prioritize?

Theme 3: Professional Works, Mindsets, Tasks and Skills

Overview: Generative AI is a protean and multidimensional technology, with a wide range of capabilities to produce complex, unique outputs across a range of domains (from programming to visual art, from poetry to science and more). That said, it works well only when steered by a knowledgeable human who brings expertise, both in the domain and in working with AI, to the table. These capabilities can be an immense boon to educators, allowing them to develop creative curricula and assessments, and allowing their students to use these tools in creative ways to support their own learning (Henriksen, Woo & Mishra, 2023). These tools also enable quicker curriculum adjustments and the development of innovative pedagogical approaches and assessment techniques. 

The importance of the human in learning cannot be overstated. Clarifying, both for themselves and for the profession, what learning means and what an educator’s role is in the learning process will be crucial for articulating when and how to use generative AI in education. Educators need to critically engage with AI-generated materials, discerning their relevance and application (Close, Warr, & Mishra, 2023). They need to develop a creative mindset that allows them to explore, play with, and understand the possibilities and challenges of bringing these technologies to educational contexts (Warr, Mishra, Henriksen, & Woo, 2023). They also need to be alert to the unintended consequences of AI in education, in particular a further deprofessionalization of teaching. Technology companies and financially strapped districts may point to the “efficiencies” of technology, arguing, as they have with older technologies, that it is an equal substitute for teaching.

Reflective Questions

  1. Does our curriculum/program address how GenAI potentially reshapes the teaching profession and educators’ roles?
    1. Does our curriculum/program offer strategies for educators to maintain their agency in the face of GenAI advancements?
    2. Does our curriculum/program stress the importance of soft skills and human values to maintain human-centric pedagogies in the face of AI integration?
  2. Does our curriculum/program equip educators to effectively integrate and critically evaluate GenAI-generated content?
    1. Does our curriculum/program train teacher candidates to critically assess GenAI outputs for specific disciplines and educational contexts?
    2. Does our curriculum/program impart essential skills or knowledge for educators to adapt and revise GenAI-generated content?
  3. Does our curriculum/program provide opportunities for educators to learn how best to work with GenAI to develop and enact curricular goals?
    1. Does our curriculum/program explore how rapid prototyping with GenAI might lead to creative and innovative pedagogical strategies?
    2. Does our curriculum/program guide educators in developing new forms of assessment that truly get at student learning and cannot be subverted by generative AI?

Theme 4: Broader context of Teacher Preparation Programs 

Overview: The integration of generative artificial intelligence (GenAI) into teacher preparation programs presents a transformative shift, influencing not only admissions and evaluation processes but also the transparency and support systems essential for pre-service teachers and instructors. As GenAI technologies evolve and potentially become as commonplace as smartphones within the next five years, it is crucial to anticipate and strategically plan for their implications in teacher education. Key considerations include the impact of GenAI on the admission of teacher recruits. This encompasses how AI might alter existing barriers, potentially streamlining the process or inadvertently creating new hurdles, particularly for diverse candidates. Furthermore, the role of AI in the ongoing evaluation of teachers during their training period is a vital aspect, raising questions about the fairness and inclusivity of such assessments.

Equity in AI-driven assessment systems is a paramount concern, particularly in addressing challenges related to language diversity, accents, multilingualism, and disabilities. It’s essential to consider whether AI can effectively identify relevant capacities and dispositions in teacher candidates without reinforcing existing biases or inequities. Transparency in the deployment of these AI systems is critical, ensuring that all participants understand how their performance is being assessed and the basis of the feedback provided. Additionally, exploring key technological points of entry that allow for the integration of AI into the education system will be crucial in managing its impact on teacher training. This includes considering what data the AI requires and ensuring that the machine learning algorithms and advisory paths do not perpetuate structural inequities within teacher education and the broader landscape of higher education.

Reflective Questions

  1. How do we create equity in AI-driven assessment systems?
    1. Will AI be used to admit students to the program? How might AI increase barriers for admission and how might it reduce unnecessary barriers to admission?
    2. Will pre-service teachers be given feedback by AI systems? How will we ensure equity for all, including multilingual students, disabled students, and other students who may potentially be marginalized by the use of AI?
  2. Will we use AI to advise students throughout their programmatic experience?
    1. What data will the AI need?
    2. How do we ensure that the machine learning and advisory path devised by the AI will not reproduce structural inequities within teacher education and higher education?

Theme 5: Teaching About Generative AI and Its Impacts on Society

Overview: Not only will educators need to consider if and how they will incorporate generative AI in their teaching, they will also need to prepare students to live in a world shaped by generative AI (Richardson, Oster, Henriksen, & Mishra, 2023). As young citizens who engage with technologies in their daily lives, children deserve a curriculum that allows them to think about the impact of technology on themselves and their world. Technologies themselves can and should be contested, subject to reconstruction and democratic participation (Feenberg, 1991). Teachers can help students shift perspective from passive users to active citizens who make informed decisions and take action for more just communities. A generative AI curriculum that teacher preparation programs might consider implementing includes teaching with, about, and against technologies (Yadav & Lachney, 2022) and toward a civics of technology (Krutka & Heath, 2022), helping students examine the force technology exerts on society and the ways that technologies extend societal biases.

Reflective Questions

  1. Does your program equip teachers to address what K-12 students will need to know and be able to do in order to live in a world with pervasive LLMs?
    1. What do students need to know about the specific technological workings of generative AI in order to make informed decisions about its presence and use in their lives?
    2. What do students need to know about the ways that generative AI intersects with and amplifies societal biases?
    3. What might constitute ethical uses of AI in students’ daily, educational, and social lives?
  2. Which disciplines and grades can include standards to build knowledge and skills about the social and ethical impacts of generative AI?
    1. How can each of the disciplines bring their disciplinary lenses to a holistic understanding of AI in society?
    2. What are age and developmentally appropriate ways to teach about, with, and against AI?
  3. How can teacher preparation programs build these standards into their programmatic curricula?
    1. What is an iterative process for incorporating this content into courses in teacher preparation programs?

References

Bartlett, R., Morse, A., Stanton, R., & Wallace, N. (2022). Consumer-lending discrimination in the FinTech era. Journal of Financial Economics, 143(1), 30-56. 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜 In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency (pp. 610-623).

Benjamin, R. (2019). Assessing risk, automating racism. Science, 366(6464), 421-422.

Benjamin, R. (2020). Race after technology: Abolitionist tools for the new Jim code. Wiley.

Berkshire, J., & Schneider, J. (Hosts). (2023, August 8). AI is going to upend public education. Or maybe not. With Larry Cuban. In Have You Heard.

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press. 

Cho, K. (2023, August 7). Woman sues Detroit after facial recognition mistakes her for crime suspect. Washington Post. https://www.washingtonpost.com/nation/2023/08/07/michigan-porcha-woodruff-arrest-facial-recognition/ 

Close, K., Warr, M., & Mishra, P. (2023). The Ethical Consequences, Contestations, and Possibilities of Designs in Educational Systems. TechTrends. https://doi.org/10.1007/s11528-023-00900-7

Feenberg, A. (1991). Critical theory of technology (Vol. 5). Oxford University Press.

Future of Life Institute. (2023, March 22). Pause giant AI experiments: An open letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ 

Heath, M.K. & Krutka, D.G. (2023, April 23). Collectively Asking Techno Skeptical Questions About ChatGPT. Civics of Technology Blog.

Hendrix, J. (Host). (2023, January 29). An indigenous perspective on generative AI with Michael Running Wolf. In Tech Policy Press, The Sunday Show.

Henriksen, D., Woo, L., & Mishra, P. (2023). Creative uses of ChatGPT for education: A conversation with Ethan Mollick. TechTrends. https://doi.org/10.1007/s11528-023-00862-w

Kentayya, S. (Director). (2020). Coded Bias. Netflix.

Krutka, D. G., Heath, M. K., & Smits, R. M. (2022). Toward a civics of technology. Journal of Technology and Teacher Education, 30(2), 229-237.

Mary Lou Fulton Teachers College. (2023, DATE). 15 questions every college professor should be asking about ChatGPT and other generative AI: Reflective slidedeck. https://docs.google.com/presentation/d/1Yb6m_xJnx7hgzRronZ_gjjspHGv16jTZpYCqkNBSQPY/edit#slide=id.g27dd2c7880a_0_6022

Marx, P. (Host). (2023, July 20). Big tech won’t revitalize indigenous languages with Keoni Mahelona. In Tech Won’t Save Us.

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and generative AI. Journal of Digital Learning in Teacher Education. https://doi.org/10.1080/21532974.2023.2247480

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Richardson, C., Oster, N., Henriksen, D., & Mishra, P. (2023). Artificial intelligence, responsible innovation, and the future of humanity with Andrew Maynard. TechTrends. https://doi.org/10.1007/s11528-023-00921-2

Trust, T. (2023, April). ChatGPT and Education [Google Slides].

Warr, M., Oster, N., & Isaac, R. (2023). Implicit bias in large language models: Experimental proof and implications for education. Available at SSRN: https://ssrn.com/abstract=4625078 or http://dx.doi.org/10.2139/ssrn.4625078

Warr, M., Mishra, P., Henriksen, D., & Woo, L. J. (2023). A chat about GPT3 (and other forms of alien intelligence) with Chris Dede. TechTrends. https://doi.org/10.1007/s11528-023-00843-z

Williamson, B. (2023). The social life of AI in education. International Journal of Artificial Intelligence in Education, 1-8.

Yadav, A., & Lachney, M. (2022). Teaching with, about, and through technology: Visions for the future of teacher education. Journal of Technology and Teacher Education, 30(2), 189-200.
