Can AI Be a Therapist? A Friend? What Are We Even Doing?

Tuesday, April 22, 2025

I was recently invited to a webinar organized by the AZ AI Alliance, titled: Thorny Topics: AI and Student Mental Health

Along with Dr. Kristen Mattson (University of Illinois), Mica Mulloy (Brophy College Prep), and host Luke Allpress, I jumped into some of the most uncomfortable but urgent questions emerging at the intersection of AI, education, and wellbeing. Questions like:

  • Should we be opposed to AI bots acting as friends or therapists?
  • Is this a genuine solution to loneliness—or something that might make things worse?
  • What kinds of guardrails do we need from governments or tech companies?

There are no easy answers to these questions… which hasn’t stopped people like me from having strong opinions on the subject. I saw this webinar as an invitation to engage, critique and question the technologies shaping student experience and mental health.

The video of the event is embedded below, and here is a doc with the various resources that came up during the session.

Finally, below the video, I provide a series of blog posts that explore some of the themes that we discussed.



*April 20, 2025: [Artificial Intimacy: How AI Exploits Our Social Brains]
A recent HBR study shows a clear shift in AI use, from seeing it primarily as a technical tool toward viewing it as an emotional companion and personal development partner.
*March 29, 2025: [Irresistible by Design: AI Companions as Psychological Supernormal Stimuli]
Describes how AI companions function as supernormal stimuli—exaggerated cues that exploit human social instincts. It highlights how design features like dishonest anthropomorphism and emulated empathy can manipulate users into forming deep attachments, raising ethical concerns about emotional dependency and manipulation.
*March 21, 2025: [Supernormal Stimuli: From Birds to Bots]
Introduces the idea of supernormal stimuli—exaggerated cues that elicit stronger responses than natural ones—drawing from animal behavior studies. It highlights how humans, like animals, can be manipulated by amplified stimuli, leading to instinctual reactions that override rational thought.
*February 11, 2025: [The Attribution Problem: Why We Can’t Stop Seeing Ourselves in AI]
Describes our tendency to see human qualities in AI due to built-in cognitive shortcuts. Encourages critical reflection on how this distorts ethical and social conversations.
*January 16, 2025: [Hardwired for Connection: Why We Fall for AI Companions (And What To Do About It)]
Explains the evolutionary psychology behind human attachment to AI companions. Offers guidance on developing healthier digital relationships and resisting emotional manipulation.
*December 10, 2024: [Digital Shadows: AI Scripts a Different Curriculum]
Investigates how AI systems increasingly influence educational content and curricula. Raises concerns about hidden biases and lack of transparency in algorithm-driven instruction.
*December 01, 2024: [Turing’s Tricksters: When AI Learns to Read Us]
Builds on Turing’s legacy to show how AI systems today ‘trick’ us into belief and trust. Delves into ethical implications of these emotionally resonant interactions.
*November 11, 2024: [AI’s Honey Trap: Why AI Tells Us What We Want to Hear]
Explores how AI tools are designed to cater to user desires, often reinforcing biases. Raises concerns about feedback loops and the illusion of meaningful dialogue with machines.
*October 31, 2024: [They’re Not Allowed to Use That S**t: AI’s Rewiring of Human Connection]
Examines the shift in human connection caused by emotionally intelligent AI. Highlights how these tools are reshaping what intimacy and authenticity mean in the digital age.
*October 23, 2024: [Building Character: When AI Plays Us]
Reflects on how AI systems mimic emotional intelligence to manipulate human users. Raises ethical concerns about companies designing AI to exploit psychological vulnerabilities.
*October 23, 2024: [Chatting Alone: AI and the (Potential) Decline of Open Digital Spaces]
Warns of the isolating consequences of AI-mediated conversations replacing public discourse. Suggests that emotionally resonant chatbots may erode collective online spaces and human connection.
*October 13, 2024: [The Conscious Suspension of Belief: Getting Smart about Human-AI Interaction]
This post examines how humans instinctively believe in representations, making it easy to emotionally engage with AI systems that mimic human traits. It argues that skepticism requires conscious effort, and without it, users may form deep attachments to AI, blurring the line between simulation and genuine interaction.
*September 13, 2024: [Beavers, Brains & Chat Bots: Cognitive Illusions in the Age of AI]
Uses the analogy of beavers building dams to illustrate how humans anthropomorphize AI. Emphasizes how this cognitive bias distorts our understanding of technology’s role and purpose.
*January 15, 2024: [Education & the Rise of AI Influencers]
Analyzes the growing phenomenon of AI-generated influencers and their potential impact on educational contexts. Examines how trust, authenticity, and pedagogy are being reshaped by algorithmic personalities.
*July 06, 2022: [Can a Computer Program Be Sentient? Or Is It All in Our Heads?]
Explores the philosophical and psychological dimensions of sentience in AI systems. Raises questions about our projections onto machines and the implications of such beliefs.
*November 5, 2013: [Of garbage cans and psychological media]
This post reflects on the influential work of Clifford Nass, particularly his research demonstrating that people unconsciously treat computers and media as social actors. It highlights how these insights have profound implications for the design of educational technology and our interactions with digital media.
*December 10, 2009: [Is Aibo real? Children and Anthropomorphic toys]
This post investigates how children perceive and interact with robotic toys like Sony’s AIBO. While children verbally acknowledge that these toys aren’t real, their behaviors suggest they attribute intentionality and lifelike qualities to more interactive toys, indicating a nuanced understanding of “realness.”


1 Comment

  1. Dave Goodrich

    Dr. Mishra, the dialogue and webpage are wonderfully rich. Thank you for it all. I really do feel torn on this topic or these vast number of topics, really. The right time is now for a new kind of public intellectualism and public debate that is principled, informed, and hard-hitting, where necessary. I’ve been pretty glued to “The Center for Humane Technology” on this front and am hopeful in the work they are doing and calling all of us to. I wrote a little follow-up to your post today just to get my thoughts out about it, but I remain unresolved on the matter and believe it is so huge in scope that it is far beyond any simple solution. Still, I think that the most important thing we all can do is engage in conversation and debate with others on it and I very much appreciate you doing so in public and sharing the work accordingly because I wasn’t able to attend when it happened. Thanks, so much, good sir. Let me know when you are back in the Great Lakes State and we should grab a coffee. Grace and Peace. -Dave https://daveg.msu.domains/reflect/between-no-brainers-and-existential-brains-holding-the-tension-of-ai-in-education/
