AI in teaching & learning: A critical response (by AI)

Wednesday, May 31, 2023

AI in education can aid
But bias and fairness must be weighed
Educators and students must co-design
To make assessments more fine
And ensure learning growth is not delayed

The above is a limerick created by AILYZE (www.ailyze.com) to summarize the recently released report from the Office of Educational Technology titled “Artificial Intelligence and the Future of Teaching and Learning.” It also created an on-demand haiku – but I do think its syllable counting ability needs updating.

AI in education
Must address bias and fairness
For learning to thrive
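
As a quick check on that quibble, here is a rough Python sketch (mine, not AILYZE's) that counts vowel groups per word. It is only a heuristic, and a generous one, since it treats "AI" and "bias" as single syllables, yet it still puts the first line over the five syllables a haiku allows:

```python
import re

def rough_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups, dropping a common silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if count > 1 and word.endswith("e") and not word.endswith("le"):
        count -= 1
    return max(count, 1)

haiku = [
    "AI in education",                 # should be 5 syllables
    "Must address bias and fairness",  # should be 7
    "For learning to thrive",          # should be 5
]

for line in haiku:
    words = re.findall(r"[A-Za-z']+", line)
    print(sum(rough_syllables(w) for w in words), line)  # prints 6, 7, 5
```

Read aloud, the counts are higher still ("A-I" is two syllables, "bi-as" is two), which only strengthens the point.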

It also created an abstract of the key ideas in the document:

Abstract: The document extracts discuss the importance of addressing bias and fairness in AI-enhanced formative assessments in education. The use of AI can provide teachers with additional information about students’ work and support instructional adaptation. However, there are risks of algorithmic discrimination and unintended consequences, which require urgent attention. The involvement of educators and students in designing feedback loops and co-designing AI-enabled edtech is crucial. The focus should be on learning growth and gains, advancing equity, and addressing context and enhancing trust and safety. The integration of contextual, content, and technical expertise is necessary. The use of AI should not hinder innovation or be burdensome to implement, and policies should protect civil rights and promote democratic values. The goal is to strengthen and support learning for all students, especially those who have experienced unfavorable circumstances for learning.

And then a table with key themes and a detailed description of each (with quotes and examples):

Themes: Artificial Intelligence and the Future of Teaching and Learning 

  * Limitations of AI models: AI models are incomplete and have limitations in their ability to fit the reality we care about. They don’t have the same “common sense” judgment that people have, often responding in ways that are unnatural or incorrect.
  * Importance of having humans in the loop: Humans being part of the process of noticing patterns in an educational system and assigning meaning to those patterns is essential. Exercising judgment and control in the use of AI systems and tools is necessary to provide the best opportunity to learn for all students.
  * Need to consider context: AI models should be made more context-sensitive to ensure their effectiveness, safety, and trustworthiness for use with varied learners in diverse settings. Attention to context is necessary to express the multiple dimensions that must be considered when elaborating the phrase “for whom and under what conditions.”
  * Need for trust, safety, and ethical considerations: Researchers must consider not only efficacy but also explore where harm may arise and the system problems that can occur through weak trust or over-trust in AI systems. Prioritizing challenges of context, trust, and safety in their work to advance AI is recommended. Development of education-specific guidelines and guardrails to ensure that educators can realize the opportunities of AI in edtech while minimizing risks is necessary.
  * Importance of addressing the long tail of learner variability: AI has the potential to personalize learning and support diverse learners, but this requires attention to the unique needs and characteristics of individual learners. R&D funders should focus resources on the long tail of learner variability to ensure that AI-enabled systems can support all learners.
  * Value of feedback loops: Feedback loops can help teachers to identify areas where students are struggling and provide targeted support to help them improve. They can also help teachers to reflect on their own practice and identify areas where they can improve their teaching.
  * Limitations of AI in formative assessment: Automated essay scoring (AES) technologies can be misled by the length of an essay or inappropriate keyword placement. AI and humans may not notice the same things in student writing. Therefore, AI should support but not replace high-quality, human-led processes and practices of formative assessment in schools.
  * Opportunities for AI in formative assessment: AI can help measure the broader skills students need to succeed and thrive, detect when a student is struggling, and provide teachers with additional information about students’ recent work. AI can also help to improve help-seeking and help-giving by providing students with personalized feedback and support.
  * Bias and fairness in AI-enabled formative assessment: Strong and deliberate attention to bias and fairness is needed as future formative assessments are developed. Algorithmic discrimination can arise due to the manner in which AI algorithms are developed and improved from large datasets of parameters and values that may not represent all cohorts of learners.
  * Potential benefits of AI in teaching: AI has the potential to improve teaching by providing personalized learning experiences, improving student engagement, and making lesson planning more efficient.
  * Need for teacher training and support: Teachers need training and support to take advantage of the opportunities that AI can provide.
  * Tension between human and computer decision-making: There is a tension between human and computer decision-making in the context of AI-enabled tools for teachers.
  * Potential for increased surveillance: AI-enabled technologies might introduce new forms of surveillance.
  * Need to balance student privacy and representation: There is a need to balance student privacy and representation in the context of AI-enabled technologies.
  * Importance of cultural responsiveness: There is a need for cultural responsiveness in the design and implementation of AI-enabled technologies.

Finally, I asked it to provide a critical response to the document, identifying what it gets right and where it falls short.

Answer: The document extracts provide valuable insights into the use of AI in education and highlight the need for policies that prioritize equity, privacy, and human involvement. However, there are also several areas where the document falls short.

Three things the document gets right are:

  1. The importance of addressing bias and fairness in AI assessments: The document acknowledges that traditional assessments have issues of bias and fairness, and the field of psychometrics has developed tools to address these issues. However, the document also recognizes that the issues expand with AI, and there is a need for strong and deliberate attention to bias and fairness as future formative assessments are developed.
  2. The need for human involvement in AI: The document emphasizes the importance of involving humans in the loop when it comes to AI in education. It argues that policies should establish human involvement as a requirement in educational applications, despite contrary pressures to use AI as an alternative to human decision making. The document also suggests involving educators in reviewing existing AI-enabled systems, tools, and data use in schools, designing new applications of AI based on teacher input, and carrying out pilot evaluations of proposed new instructional tools.
  3. The importance of context in AI: The document recognizes that classrooms and schools are different, and at home, students learn in informal settings in ways that could complement school learning. It recommends attention to “context” as a means for expressing the multiple dimensions that must be considered when elaborating the phrase “for whom and under what conditions.”

However, there are also several areas where the document falls short:

  1. Lack of concrete recommendations: While the document provides valuable insights into the use of AI in education, it lacks concrete recommendations for policymakers and educators. It suggests several areas where progress is needed, such as advancing research and development to increase fairness, accountability, transparency, and safety in AI systems used in educational settings, but it does not provide specific recommendations for achieving these goals.
  2. Limited discussion of data privacy: While the document acknowledges the importance of ensuring the security and privacy of student, teacher, and other human data in AI systems, it does not provide a detailed discussion of how this can be achieved. It suggests that educators can inspect edtech to determine whether and how AI is being incorporated within edtech systems, but it does not provide specific recommendations for protecting data privacy.
  3. Overemphasis on AI: While the document recognizes the potential benefits of AI in education, it also overemphasizes the role of AI in improving learning outcomes. It suggests that AI can tailor a model to the less common and more culturally specific contexts, but it does not acknowledge the limitations of AI in understanding and working with context in the ways people do.
  4. Lack of attention to digital equity: While the document acknowledges the need for policies that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems to be used across the many decentralized levels of the American educational system, it does not provide specific recommendations for advancing digital equity for all.
  5. Insufficient discussion of algorithmic discrimination: While the document acknowledges the risk of algorithmic discrimination, it does not provide specific recommendations for minimizing bias and promoting fairness in AI models. It suggests that developers and implementers of AI in education take strong steps to minimize bias, but it does not provide specific recommendations for achieving this goal.

In conclusion, the document extracts provide valuable insights into the use of AI in education, but there are also several areas where the document falls short. Policymakers and educators should take these insights into account when developing policies and practices that promote equity, privacy, and human involvement in AI in education.
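
If you want to try a similar exercise yourself, the general pattern is straightforward: hand the model the report text and ask for each kind of output in turn. AILYZE is a web tool and its internals aren’t public, so the sketch below is only an illustration of that pattern using the OpenAI Python client; the model name, prompts, and local filename are all assumptions you would replace with your own.

```python
# Generic "summarize, then critique" pattern. This is NOT AILYZE's pipeline (which
# isn't public); the model name, prompts, and filename are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical local text copy of the report; a long document may need to be
# split into chunks first (AILYZE's answers refer to "document extracts").
report_text = open("ai_future_of_teaching_and_learning.txt").read()

def ask(prompt: str) -> str:
    """Send a single-turn request with the report attached and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever you have access to
        messages=[{"role": "user", "content": f"{prompt}\n\n{report_text}"}],
    )
    return response.choices[0].message.content

print(ask("Summarize this report as a limerick."))
print(ask("Write an abstract of the key ideas in this report."))
print(ask("Give a critical response: what does the report get right, "
          "and where does it fall short?"))
```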

What do you think?

Topics related to this post: AI | Learning Futures | News | Teaching | Technology
