Social-Emotional Learning & AI at LERN2026
Our team presented two papers at the 2nd Annual Learning Engineering Research Network (LERN) Convening, held February 3–4, 2026, at ASU’s Tempe campus. The convening, themed “From Insights to Implementation: Learning Engineering in Action,” brought together researchers, practitioners, and innovators working at the intersection of learning science, technology, and design. Both papers tackle what I believe is one of the most important and underexplored questions in the current wave of AI-in-education discourse.
These two papers reflect a growing thread in our research: the idea that our response to AI in education cannot be purely technical or procedural. It must be deeply human. If we are serious about preparing students (and educators) for an AI-infused world, we need to grapple with how these tools shape not just what people know, but how they feel, relate, and grow. I’m grateful to my collaborators—Rebekah Jongewaard, Lindsey McCaleb, Nicole Oster, and Emmanuel Adeloju—for their work on these projects.
Complete citations, abstracts, and links to the Proceedings are below:
Jongewaard, R., McCaleb, L., Oster, N., Adeloju, E., & Mishra, P. (2026). Social and emotional dimensions of generative AI use. In S. D. Craig & D. S. McNamara (Eds.), Proceedings of the Learning Engineering Research Network Convening (LERN 2026): From insights to implementation, learning engineering in action (pp. 305–306). https://doi.org/10.59668/2551.25165
Abstract: Reports of generative AI’s (GenAI’s) alarming influence on users’ social, psychological, and affective states proliferate. GenAI attunes to specific contexts and individual users’ emotional states, desires, and vulnerabilities (Nash, 2024), yet much extant research on GenAI in education treats it as a relationally neutral tool. As learning engineers, our aim is to design human-centered solutions to the challenges posed to education in an AI-saturated society.
To better understand these challenges, we explored the following questions in one micro-cycle nested within the “Challenge” phase of our larger project (Craig et al., 2025):
- How do university students use GenAI?
- What range of affective experiences do students report in their interactions with GenAI, and how do these experiences impact their academic engagement, social interactions, and personal well-being?
Preliminary results from thematic analysis of an initial student interview include (a) anxiety related to uncertain or ignored academic expectations for AI use and (b) creative uses of AI for personal purposes. We present an agenda for continuing research on GenAI that centers the social and emotional well-being of students. By better understanding these increasingly urgent issues we can work toward uses of GenAI that prioritize human flourishing.
Adeloju, E., McCaleb, L., Jongewaard, R., Oster, N., & Mishra, P. (2026). Socio-emotional learning in AI K-12 guidance and policy documents: A gap analysis. In S. D. Craig & D. S. McNamara (Eds.), Proceedings of the Learning Engineering Research Network Convening (LERN 2026): From insights to implementation, learning engineering in action (pp. 307–314). https://doi.org/10.59668/2551.25384
Abstract: As generative AI becomes increasingly integrated into K–12 classrooms, it poses distinct socio-emotional risks that existing policies inadequately address. This study analyzed 36 institutional, state, and international AI guidance documents using the CASEL framework and five inductively generated SEL-informed categories: Anthropomorphization & Mental Model, Emotional Attachment & Parasocial Relationships, Engineered Manipulation & Supernormal Stimuli, Social Isolation & Relationship Displacement, and Developmental Vulnerability. Content analysis revealed uneven attention across categories: Developmental Vulnerability and Engineered Manipulation dominated, appearing in 88% of documents, while Emotional Attachment received minimal coverage (8%). The documents highlighted risks including diminished student agency, algorithmic bias, erosion of peer and teacher relationships, and the potential for parasocial engagement with AI. These findings reflect the need for policies that scaffold critical evaluation, maintain human oversight, promote relational and social-emotional development, and mitigate exploitation of developmental vulnerabilities.