AI in education can aid
But bias and fairness must be weighed
Educators and students must co-design
To make assessments more fine
And ensure learning growth is not delayed
The above is a limerick created by AILYZE (www.ailyze.com) to summarize the recently released report from the Office of Educational Technology titled “Artificial Intelligence and the Future of Teaching and Learning.” It also created an on-demand haiku, though I do think its syllable counting ability needs updating.
AI in education
Must address bias and fairness
For learning to thrive
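The mis-metered haiku is easy to check mechanically. Below is a minimal sketch of the kind of vowel-group heuristic a naive syllable counter might use; the special case for “AI” and the silent-“e” rule are my own simplifications for illustration, not anything AILYZE documents.

```python
import re

def count_syllables(word):
    """Rough heuristic: count runs of consecutive vowels, dropping a
    trailing silent 'e'. 'AI' is special-cased as two syllables."""
    word = word.lower().strip(".,;:!?")
    if word == "ai":
        return 2  # pronounced "A-I"
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    # Treat a trailing 'e' as silent (e.g. "thrive") unless it is
    # part of the only vowel group or an '-le'/'-ee' ending.
    if count > 1 and word.endswith("e") and not word.endswith(("le", "ee")):
        count -= 1
    return max(count, 1)

def line_syllables(line):
    return sum(count_syllables(w) for w in line.split())

haiku = ["AI in education",
         "Must address bias and fairness",
         "For learning to thrive"]
for line in haiku:
    # The first line scores 7 syllables, not the 5 a haiku requires.
    print(line, "->", line_syllables(line))
```

Even this crude heuristic flags the opening line as two syllables over budget, which matches a by-hand count of “A-I in ed-u-ca-tion.”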
It also created an abstract of the key ideas in the document:
Abstract: The document extracts discuss the importance of addressing bias and fairness in AI-enhanced formative assessments in education. The use of AI can provide teachers with additional information about students’ work and support instructional adaptation. However, there are risks of algorithmic discrimination and unintended consequences, which require urgent attention. The involvement of educators and students in designing feedback loops and co-designing AI-enabled edtech is crucial. The focus should be on learning growth and gains, advancing equity, and addressing context and enhancing trust and safety. The integration of contextual, content, and technical expertise is necessary. The use of AI should not hinder innovation or be burdensome to implement, and policies should protect civil rights and promote democratic values. The goal is to strengthen and support learning for all students, especially those who have experienced unfavorable circumstances for learning.
It then created a table with key themes and descriptions of those themes:
Themes: Artificial Intelligence and the Future of Teaching and Learning
| Themes | Detailed Description with Quotes and Examples |
| --- | --- |
| Limitations of AI models | AI models are incomplete and have limitations in their ability to fit the reality we care about. They don’t have the same “common sense” judgment that people have, often responding in ways that are unnatural or incorrect. |
| Importance of having humans in the loop | Humans being part of the process of noticing patterns in an educational system and assigning meaning to those patterns is essential. Exercising judgment and control in the use of AI systems and tools is necessary to provide the best opportunity to learn for all students. |
| Need to consider context | AI models should be made more context-sensitive to ensure their effectiveness, safety, and trustworthiness for use with varied learners in diverse settings. Attention to context is necessary to express the multiple dimensions that must be considered when elaborating the phrase “for whom and under what conditions.” |
| Need for trust, safety, and ethical considerations | Researchers must consider not only efficacy but also explore where harm may arise and the system problems that can occur through weak trust or over-trust in AI systems. Prioritizing challenges of context, trust, and safety in their work to advance AI is recommended. Development of education-specific guidelines and guardrails to ensure that educators can realize the opportunities of AI in edtech while minimizing risks is necessary. |
| Importance of addressing the long tail of learner variability | AI has the potential to personalize learning and support diverse learners, but this requires attention to the unique needs and characteristics of individual learners. R&D funders should focus resources on the long tail of learner variability to ensure that AI-enabled systems can support all learners. |
| Value of feedback loops | Feedback loops can help teachers to identify areas where students are struggling and provide targeted support to help them improve. They can also help teachers to reflect on their own practice and identify areas where they can improve their teaching. |
| Limitations of AI in formative assessment | Automated essay scoring (AES) technologies can be misled by the length of an essay or inappropriate keyword placement. AI and humans may not notice the same things in student writing. Therefore, AI should support but not replace high-quality, human-led processes and practices of formative assessment in schools. |
| Opportunities for AI in formative assessment | AI can help measure the broader skills students need to succeed and thrive, detect when a student is struggling, and provide teachers with additional information about students’ recent work. AI can also help to improve help-seeking and help-giving by providing students with personalized feedback and support. |
| Bias and fairness in AI-enabled formative assessment | Strong and deliberate attention to bias and fairness is needed as future formative assessments are developed. Algorithmic discrimination can arise due to the manner in which AI algorithms are developed and improved from large datasets of parameters and values that may not represent all cohorts of learners. |
| Potential benefits of AI in teaching | AI has the potential to improve teaching by providing personalized learning experiences, improving student engagement, and making lesson planning more efficient. |
| Need for teacher training and support | Teachers need training and support to take advantage of the opportunities that AI can provide. |
| Tension between human and computer decision-making | There is a tension between human and computer decision-making in the context of AI-enabled tools for teachers. |
| Potential for increased surveillance | AI-enabled technologies might introduce new forms of surveillance. |
| Need to balance student privacy and representation | There is a need to balance student privacy and representation in the context of AI-enabled technologies. |
| Importance of cultural responsiveness | There is a need for cultural responsiveness in the design and implementation of AI-enabled technologies. |
Finally, I asked it to provide a critical response to the document, identifying what it gets right and where it falls short.
Answer: The document extracts provide valuable insights into the use of AI in education and highlight the need for policies that prioritize equity, privacy, and human involvement. However, there are also several areas where the document falls short.
Three things the document gets right are:
- The importance of addressing bias and fairness in AI assessments: The document acknowledges that traditional assessments have issues of bias and fairness, and the field of psychometrics has developed tools to address these issues. However, the document also recognizes that the issues expand with AI, and there is a need for strong and deliberate attention to bias and fairness as future formative assessments are developed.
- The need for human involvement in AI: The document emphasizes the importance of involving humans in the loop when it comes to AI in education. It argues that policies should establish human involvement as a requirement in educational applications, despite contrary pressures to use AI as an alternative to human decision making. The document also suggests involving educators in reviewing existing AI-enabled systems, tools, and data use in schools, designing new applications of AI based on teacher input, and carrying out pilot evaluations of proposed new instructional tools.
- The importance of context in AI: The document recognizes that classrooms and schools are different, and at home, students learn in informal settings in ways that could complement school learning. It recommends attention to “context” as a means for expressing the multiple dimensions that must be considered when elaborating the phrase “for whom and under what conditions.”
However, there are also several areas where the document falls short:
- Lack of concrete recommendations: While the document provides valuable insights into the use of AI in education, it lacks concrete recommendations for policymakers and educators. It suggests several areas where progress is needed, such as advancing research and development to increase fairness, accountability, transparency, and safety in AI systems used in educational settings, but it does not provide specific recommendations for achieving these goals.
- Limited discussion of data privacy: While the document acknowledges the importance of ensuring the security and privacy of student, teacher, and other human data in AI systems, it does not provide a detailed discussion of how this can be achieved. It suggests that educators can inspect edtech to determine whether and how AI is being incorporated within edtech systems, but it does not provide specific recommendations for protecting data privacy.
- Overemphasis on AI: While the document recognizes the potential benefits of AI in education, it also overemphasizes the role of AI in improving learning outcomes. It suggests that AI can tailor a model to the less common and more culturally specific contexts, but it does not acknowledge the limitations of AI in understanding and working with context in the ways people do.
- Lack of attention to digital equity: While the document acknowledges the need for policies that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems to be used across the many decentralized levels of the American educational system, it does not provide specific recommendations for advancing digital equity for all.
- Insufficient discussion of algorithmic discrimination: While the document acknowledges the risk of algorithmic discrimination, it does not provide specific recommendations for minimizing bias and promoting fairness in AI models. It suggests that developers and implementers of AI in education take strong steps to minimize bias, but it does not provide specific recommendations for achieving this goal.
In conclusion, the document extracts provide valuable insights into the use of AI in education, but there are also several areas where the document falls short. Policymakers and educators should take these insights into account when developing policies and practices that promote equity, privacy, and human involvement in AI in education.
What do you think?