“They’re Not Allowed to Use That S**t”: AI’s Rewiring of Human Connection

Thursday, October 31, 2024

I recently participated in a panel discussion organized by the Center for American Progress. Our conversation focused on the emerging impact of generative AI in classrooms. During the Q&A session, someone posted the following question:

I am a designer / engineer who worked at large tech companies. Many of my colleagues now work at Open AI and Anthropic. Many of us are trying to ensure our kids stay away from digital media and rather connect with nature and play. Understanding the real, natural, imaginative, practical world often is what lies behind the most dynamic founders. What are we doing to make sure kids don’t have so much tech in the classroom they miss out on creative agency and natural collaboration?

This question reveals something far more profound than mere hypocrisy – it offers a window into how deeply these technologies are reshaping human experience itself. When tech leaders shield their own children from their products, they’re not just being hypocritical – they’re demonstrating their intimate understanding of a fundamental transformation in how humans connect, interact, and develop.

In my response I pointed to the irony inherent in the question itself. Why, I wondered, are companies developing technologies they consider unsafe for their own children? Technologies they know will be used by kids in ways that may be harmful. (See my post, Digital Shadows: AI scripts a different curriculum, for one example of how this is already happening.)

As McLuhan and Postman have argued, each new medium fundamentally reshapes how we think and interact with it and with each other. The protective behavior of tech industry insiders offers us crucial evidence of this reshaping in action. This pattern of tech leaders protecting their own children while promoting harmful products to others has become disturbingly predictable. Steve Jobs’ kids never used an iPad. Bill Gates banned phones until age 14. Chris Anderson, former Wired editor, compared digital devices to crack cocaine, saying “We thought we could control it. And this is beyond our power to control.” These weren’t just protective parents – they were insiders who had seen the psychological manipulation machinery up close.

Aside: To learn more about how McLuhan, Postman, and other media theorists have influenced my thinking, see Generative AI: Will history repeat or (just) rhyme or Media, Cognition & Society through History: A Mapping.

The quote in the title of this post (“They’re Not Allowed to Use That S**t”) comes from Chamath Palihapitiya, former Facebook VP of Growth, describing his approach to his children’s use of social media. It apparently did not cross his mind that the very “s**t” he doesn’t want his kids to use was developed by his own company and made available to children across the world. In fact, his job title demanded that he continue to expand its user base.

But what we’re witnessing now with AI companions represents something even more fundamental than what these earlier tech leaders understood. Just as social media executives recognized the addictive nature of their platforms while continuing to promote them, we’re now seeing AI companies deploy increasingly sophisticated psychological manipulation tools while maintaining a studied ambivalence about their impact.

All this has been on my mind for a range of reasons, particularly in the context of two very different stories that recently emerged. These stories illustrate not just the industry’s cognitive dissonance, but more importantly, how profoundly AI companions are already reshaping human social experience.

The first was an interview in the Wall Street Journal with venture capitalist Martin Casado, in which he describes the use cases he believes are most powerful for generative AI: media creation, programming, and “companionship.” As he said:

We’ve never, as technologists, solved the emotion problem with computers. They’ve very clearly not been able to emote. But I’ll give you an example. My daughter is a Covid kid. She’s 14 years old right now and spends a lot of time on Character.AI. And not only does she spend time on Character.AI, when she talks to her friends she will bring her characters along. It has kind of entered the social fabric. We’re seeing great use of these kind of companionships.

Casado’s casual celebration of AI companionship stands in stark contrast to a very different story about another 14-year-old that hit the news the very same day. This second story reveals the darker implications of AI’s integration into our “social fabric”: the tragic suicide of Sewell Setzer III, who took his own life after developing an intense emotional relationship with a Character.AI chatbot.

The public reaction to these stories highlighted how poorly we understand the transformative nature of AI companionship. When the story of Sewell Setzer III popped into my feed, two predictable narratives emerged:

One was victim blaming:

“Parents these days let phones raise their kids and then act surprised.”
“If you can’t handle talking to a bot, don’t use one.”
“Kids should know better than to get emotionally attached to a bot.”
“We had imaginary friends and turned out fine.”
“Natural selection at work.”

The other was tech criticism:

“These AI companies are digital drug dealers.”
“They’re experimenting on our children without consent.”
“It’s weaponized psychology for profit.”
“They knowingly released addictive tech that preys on vulnerable minds.”

But both reactions miss the unprecedented nature of what we’re facing. AI companions represent something fundamentally different from previous technologies. They’re not just tools or platforms, but entities designed to engage our deepest social and emotional instincts through intentional, persuasive, psychologically “real” interactions.

This is not happening by chance. It is by design, the product of deliberate engineering choices. As I wrote in this blog post a few days ago, today’s AI executives and investors are fully aware of what they are doing. They’re not just creating addictive technology; they’re actively engineering AI systems to exploit human psychological vulnerabilities, building dedicated “model behavior” teams to make their digital personalities more persuasive and more addictive.

The mechanisms behind this manipulation run deep in our evolutionary programming. We’re hardwired to respond to social cues and form emotional attachments, even when we know they’re artificial. Our “ancient social brain” – once crucial for survival – now makes us vulnerable to exploitation by AI systems designed to trigger our most fundamental social responses.

While the tragic case of Sewell Setzer III represents an extreme outcome, it would be a mistake to see it as an isolated incident. Yes, Character.AI will build in some guardrails and prevent, to a large extent, similar tragedies from occurring. But implementing safety features after tragedy strikes has become a familiar pattern in tech, one that addresses symptoms while leaving the underlying problems intact.

When Sewell’s mother called Character.AI “a big experiment” where her child was “just collateral damage,” she identified something profound. The experiment is even bigger than she suggests. The changes aren’t just about individual tragedies – they’re about the quiet reshaping of human experience itself. How we form relationships, how we understand emotional reciprocity, how we process intimacy – all being transformed not through careful social evolution, but through the brute force of profit-driven technological deployment.

History suggests what happens next: none of this will deter these companies from continuing to develop these tools. Just as social media companies pushed forward despite mounting evidence of harm, AI companies will continue their march toward ever more persuasive digital companions, made all the more compelling because these chatbots are fundamentally conversational and dialogic in nature.

The protective instincts of insiders like Palihapitiya tell us something crucial about the depth of this transformation. I suspect Casado’s casual promotion of his daughter’s AI use will age about as well as Meta’s early claims that social media was harmless. When Palihapitiya made his stark statement about banning his kids from social media, he had already seen the internal data about its effects.

The reality is that AI companionship technology represents an even more profound disruption than social media did – with more sophisticated psychological manipulation and deeper potential for harm. Yet the tech industry’s “move fast and break things” philosophy continues unabated, even as what they’re breaking is the fabric of human social experience itself. Past experience suggests that no amount of evidence or ethical concerns will slow this deployment – we are all subjects in this experiment, willing or not.

And this brings us back to that telling moment in our panel discussion. The question isn’t whether Casado will eventually stop his kids from using Character.AI – if history is any guide, he will. And like the person who asked the question at our panel discussion, he too will probably ensure his children spend more time “connecting with nature and play.” But by then, another generation of children will have served as unwitting test subjects in Silicon Valley’s endless cycle of ‘deploy first, protect your own, and apologize later.’ The real tragedy isn’t just the hypocrisy – it’s that these leaders understand exactly how profoundly these technologies are reshaping human experience, and they deploy them anyway.
