From Surveillance to Support: Building Student Trust in the Era of AI

Monday, June 03, 2024

Note: This post originates from collaboration and discussions between Melissa, Punya, and Nicole. However, it is written from Nicole’s point of view as a current student, reflecting our efforts to explore student perspectives when considering the integration of AI in education. 

Newsweek recently reported how a former educator and curriculum designer devised a strategy for “catching” students using ChatGPT for essays by embedding a hidden sentence in the essay prompt. Some would argue that this tactic will help hold students accountable for academic integrity or support student conversations about these matters. 

On the other hand, this tactic can be seen as part of an ongoing cat-and-mouse game that, at heart, reflects a sense of distrust that educators have in their students. In this way, the story exemplifies the punitive nature of an educational culture marked by student surveillance. 

For the past two years, educators and institutions have discussed, ad nauseam, the ethical use of genAI and debated whether it undermines academic integrity. However, I am sometimes frustrated by the often one-sided, maybe even hypocritical, nature of this discussion. On one level, we have workshops and seminars in which futurists and early adopters argue for the efficiency gains teachers will receive from using these tools for lesson planning, grading, sending emails, and more. At the same time, we have a counter-push that condemns students for using genAI to complete classwork more efficiently. This attitude is even embedded in some tools, such as the Harmonize discussion software, which offers “AI prompt and rubric generation” for the instructor and “AI Detection” for the student.

I have begun to wonder why students’ ethical use of genAI dominates these conversations. As a student, I can attest that assuming that students are just out to cheat can feel demeaning and hurtful. 

Instead of wallowing in this debate, it made more sense for me to reflect on my own ethical compass and decision-making process about using generative AI. I am sensitive to the fact that my choices may not be yours, but I do think that this form of critical AI reflection (as mentioned in this post by Melissa) is the way forward.

What uses of genAI feel ethically good? 

  • Refining an idea: e.g., engaging in a back-and-forth dialogue with genAI to improve, build on, or challenge an idea, attaching relevant resources when appropriate 
  • Trying to write something standardized … trying to do “what works” … to build on what has been done before: e.g., revising a draft to sound more academic or more tailored to a specific domain or framework 
  • Brainstorming with genAI as a partner: e.g., exploring and generating new possibilities through carefully designed prompts with parameters specific to my goals 
  • Reclaiming my time when under pressure, to protect my mental health and wellbeing: e.g., using genAI to help me summarize articles (before fact-checking those summaries) for my own learning 
  • Easing my anxiety around sending emails that just need to get sent: e.g., typing out the key ideas of an email, then using genAI to polish it and improve its etiquette, or simply asking genAI if it looks good before sending 

What doesn’t feel ethically good? 

  • Using it for analysis before I have done my own analysis (in my view, this steals the opportunity to struggle and learn for myself)
  • Using it on a task that would be better if it were in my unique, context-specific, humanly flawed voice (I see this as robbing others of my authentic contributions) 
  • Endeavors that are meant to be forms of self-expression (I view this as stealing the joy of creativity from myself)

What does this mean for my writing? 

  • Refocus my human-generated writing on work that is analytical, carries a unique voice, and is a form of self-expression. 

Obviously, this one reflection should not be used to create broad institutional policies. Rather, I share it as an example of a formative individual reflective process. 

What if… 

…we encouraged students to likewise develop their own personal codes for ethical AI use? 

…we discussed these personal codes and their societal implications in community with one another? 

…we took the necessary steps to promote transparency and disclosure of genAI use among our student body by setting aside judgment and distrust? 

…we took long-awaited steps to dismantle educational hierarchies of power?

This suggestion doesn’t mean a genAI free-for-all. Rather, I offer this idea as one possible path for holding space for students to explore these ideas for ourselves. Of course, educators and institutions will still need to hold students accountable, but we must also cultivate relationships of trust. We must hold ourselves as educators and institutions accountable for truly, deeply, and critically supporting the learning of the remarkable humans we call our students. 

By reflecting together, we can explore what we value in education, how we want to learn, and why we use AI as educators and learners.

