From Surveillance to Support: Building Student Trust in the Era of AI

Monday, June 03, 2024

Note: This post grew out of collaboration and discussions among Melissa, Punya, and Nicole. It is written from Nicole's point of view as a current student, reflecting our effort to center student perspectives in conversations about integrating AI in education.

Newsweek recently reported how a former educator and curriculum designer devised a strategy for “catching” students using ChatGPT for essays by embedding a hidden sentence in the essay prompt. Some would argue that this tactic will help hold students accountable for academic integrity or support student conversations about these matters. 

On the other hand, this can be seen as one more round in an ongoing cat-and-mouse game that, at heart, reflects a sense of distrust that educators have in their students. In this way, the story exemplifies the punitive nature of an educational culture marked by student surveillance.

For the past two years, educators and institutions have discussed, ad nauseam, the ethical use of genAI and debated whether it undermines academic integrity. However, I am sometimes frustrated by the often one-sided, maybe even hypocritical, nature of this discussion. On one level, we have workshops and seminars where futurists and early adopters argue for the efficiency benefits teachers will gain from using these tools for lesson planning, grading, sending emails, and more. At the same time, we have a counter-push that condemns students for using genAI to complete classwork more efficiently. This attitude is even embedded into some tools, such as the Harmonize discussion software, which offers "AI prompt and rubric generation" for the instructor and "AI Detection" for the student.

I have begun to wonder why students’ ethical use of genAI dominates these conversations. As a student, I can attest that assuming that students are just out to cheat can feel demeaning and hurtful. 

Instead of wallowing in this debate, it made more sense for me to reflect on my own ethical compass and decision-making process about using generative AI. I am sensitive to the fact that my choices may not be yours, but I do think that this form of critical AI reflection (as mentioned in this post by Melissa) is the way forward.

What uses of genAI feel ethically good? 

  • Refining an idea: e.g., engaging in a back-and-forth dialogue with genAI to improve, build on, or challenge an idea, attaching relevant resources when appropriate
  • Trying to write something standardized … trying to do "what works" … to build on what has been done before: e.g., revising a draft to sound more academic or more tailored to a specific domain or framework
  • Brainstorming with genAI as a partner: e.g., exploring and generating new possibilities through carefully designed prompts with parameters specific to my goals
  • Reclaiming my time when under a lot of pressure, to protect my mental health and wellbeing: e.g., using genAI to help me summarize articles (before fact-checking those summaries) for my own learning
  • Easing my anxiety around sending emails that need to just get sent: e.g., typing out the key ideas of an email and then using genAI to polish it and improve its etiquette, or simply asking genAI if it looks good before sending

What doesn’t feel ethically good? 

  • Using it for analysis before I have done my own analysis (in my view, this steals the opportunity to struggle and learn for myself)
  • Using it on a task that would be better if it were in my unique, context-specific, humanly flawed voice (I see this as robbing others of my authentic contributions) 
  • Endeavors that are meant to be forms of self-expression (I view this as stealing the joy of creativity from myself)

What does this mean for my writing? 

  • Refocus human-generated writing on writing that is analytical, carries a unique voice, and is a form of self-expression. 

Obviously, this one reflection should not be used to create broad institutional policies. Rather, I share it as an example of a formative individual reflective process. 

What if… 

…we encouraged students to likewise develop their own personal codes for ethical AI use?

…we discussed these personal codes and their societal implications in community with one another? 

…we took the necessary steps to promote transparency and disclosure of genAI use among our student body by setting aside judgment and distrust? 

…we took long-awaited steps to dismantle educational hierarchies of power?

This suggestion doesn’t mean a genAI free-for-all. Rather, I offer this idea as one possible path for holding space for students to explore these ideas for ourselves. Of course, educators and institutions will still need to hold students accountable, but we must also cultivate relationships of trust. We must hold ourselves as educators and institutions accountable for truly, deeply, and critically supporting the learning of the remarkable humans we call our students. 

By reflecting together, we can explore what we value in education, how we want to learn, and why we use AI as educators and learners.
