Note: The image above is inspired by the cover of Time Magazine’s 1982 “Machine of the Year” issue (published January 1983) – at the dawn of the personal computer age. Created using Adobe Photoshop and composed in Keynote.
Angela Gunder reached out to me recently about a project that she is leading around AI and open education. She was wondering if I would be interested in having a chat with her about it. Well, there was no way I was going to say no to her, so this past Sunday we spent an hour over Zoom talking not just about AI and open education but also about AI literacies, creativity, and much more.
As we discussed the rapidly evolving landscape of AI in education, I had an insight that I hadn’t quite articulated before: AI might actually make us less open. This insight, emerging from our discussion about democratizing education and expanding access, seems counterintuitive, yet the more I thought about it, the more convinced I became of its truth.
After our conversation, I asked Angela if she could share the transcript with me. This blog post is an edited version of that conversation, unpacking this idea that’s been nagging at me. But before we jump in, some context…
The Promise of an Open Web
The early web held a powerful promise – the democratization of knowledge through radical transparency and openness. I embraced this promise wholeheartedly. Very early on in my digital life, I maintained a consistent online identity as Punya Mishra across all platforms and, more importantly, staked out my own piece of digital real estate at punyamishra.com. These weren’t just practical choices – they were commitments to transparency and public sharing.
This dedication to transparency meant everything I did was public and traceable. My website became more than just a personal space – it was a statement about the importance of making knowledge accessible. When publishers asked me to take down articles hidden behind their paywalls, I’d comply temporarily, only to put them back up again six months later. After all, this was work supported by taxpayer money at a state institution – keeping it locked away felt wrong. This wasn’t just about maintaining a consistent identity; it was about maintaining the original vision of the web as an open, shared space for knowledge and discourse.
But somewhere along the way, the web began to fragment…
From Shared Screens to Fragmented Dreams
It was not always this way… Back in the Sixties and Seventies, when humans landed on the moon, the entire world was watching on scratchy black-and-white television sets. It was a shared experience. Until almost the Nineties, we could all have these shared experiences. Don’t get me wrong – these shared media experiences were deeply problematic in their own way, privileging certain voices and perspectives while systematically ignoring or marginalizing others. I’m sensitive to how this “common culture” was often built on exclusion. But what’s replaced it may be even more troubling.
What has happened since then? Essentially, we now live in a fractured media landscape. You watch your shows on Netflix. I watch mine on Prime. In fact, we could be on the same platform, but the world of entertainment you see may be dramatically different from what I experience. Those algorithms, seemingly with our best “interests” in mind, make decisions for us, sending us down different paths – targeting, narrowing, fragmenting our experiences further and further.
And then social media (and the algorithms there) took this splintering to a whole new level. Remember when we thought platforms like Facebook and Twitter would create this global village? Back during the Arab Spring, we celebrated them as tools of democracy – the next Gutenberg revolution. Instead, what we got were echo chambers and filter bubbles, where algorithms serve us exactly what we already believe, pushing us toward more and more extreme content to keep us “engaged.”
We’ve ended up in these digital silos where we don’t just consume different entertainment – we live in entirely different realities. The same social media post will fill my feed with one version of events while showing you something completely different. It appears that we’ve lost the ability to even agree on basic facts, much less have meaningful discussions about them. This algorithmic segregation has torn at the fabric of our shared social experience, making it harder to build consensus, empathize with others, or even recognize our common humanity. When we can’t share a common understanding of reality, how can we possibly solve the complex problems facing our society?
Along Comes Gen AI
As this digital divide deepens, AI emerges as perhaps the most powerful force for personalization yet – and I think it might accelerate this fragmentation. Why? There are many reasons, but I would argue that chief among them is that it’s fundamentally a one-on-one technology. Your interaction with it is completely different from mine, and we’d never know what the other person experienced.
Aside: I have written extensively about how these relationships are going to feel “psychologically real” in ways that we have never experienced before. For more on this, see “conscious suspension of disbelief,” “beavers, brains and chatbots,” and “Turing’s Tricksters,” among other posts.
As Angela and I were talking, we found ourselves discussing how people interact with AI in surprisingly personal and creative ways. When Angela mentioned that one of her friends was getting AI to engage in scholarly dialogues that sometimes spontaneously turned into poetry, it reminded me of my own experiments. I created a GPT based on my blog posts that I called puny-Punya (my very own mini-me). It’s trained to occasionally, and randomly, break into verse. Meanwhile, Angela’s friend got AI to explain James Baldwin through bell hooks’ voice. My website has many more examples of such interactions (the latest of which is a creative collaboration with Claude). These are fascinating uses of the technology, creative and powerful in their own ways. But here’s the catch – they’re unique, personalized engagements that others might never know existed. And while I’m playing with poetic AI and Angela’s friend is exploring scholarly voices, another person might be sharing their deepest secrets with their AI companion. All these conversations happening simultaneously, yet independently of each other.
It’s like we’re all having private conversations in different rooms of a huge house, never knowing what amazing – or perhaps troubling – discussions are happening next door.
The Push for Openness Has to Come First
This is why I believe the vision of openness has to come before any technology we’re talking about. There’s something inherent in the way we are conceptualizing AI that pushes against transparency, simply because it is one-on-one.
Remember that early web? The one where we all shared and connected openly? We need to try even harder now to maintain that spirit of openness in the age of AI. It’s not going to happen naturally – we have to make it happen.
This is particularly important for those of us in education.
For us, this isn’t just an abstract concern. It’s about how we share knowledge, how we learn from each other, how we build on each other’s ideas. That’s why I’ve always made everything I do public – from my blog posts to my academic articles.
So here we are, at this interesting crossroads. AI is an incredibly powerful tool – I use it daily for everything from brainstorming to image creation. But we need to be intentional about how we use it in ways that promote openness rather than further fragmentation.
That’s why I still maintain my website, why I still share everything I can publicly, why I push my students to make their work public too. Because in this algorithmic future where our perspectives might get narrower, we need these open spaces more than ever.
In his book “Bowling Alone,” Robert Putnam warned us about the decline of American community, describing how we’d moved from bowling in leagues to bowling alone. Now we face a similar inflection point in our digital lives, but instead of bowling alone, we’re chatting alone. Just as the decline of bowling leagues signaled a broader social transformation, our shift toward private AI interactions may herald an even more profound change in how we share and create knowledge.
The question isn’t whether to use AI or not – that ship has sailed. The question is how to use it while maintaining, even strengthening, our commitment to openness.
It’s going to take extra effort, but then again, most things worth doing do.
Note: Generative AI never ceases to surprise. As I was finalizing this piece, I asked Claude to proofread it and offer feedback on two possible options for the title. The choices I gave it were: “Chatting Alone: AI and the Decline of Open Digital Spaces” vs. “Chatting Alone: AI and the (Potential) Decline of Open Digital Spaces.”
According to Claude: “Adding ‘(Potential)’ to the title makes a significant difference – and I think it’s a strong improvement,” (mainly because) “the parenthetical creates an interesting visual pause.” I asked it to explain further, and this is what Claude wrote back:
Yes! The parenthetical “(Potential)” does some clever rhetorical work here:
- Visually: Creates a literal pause in reading. Draws attention to itself as a qualification. Makes the title more visually interesting.
- Conceptually: Acts like a speed bump, making readers slow down and think. Suggests the author is thinking carefully about claims. Implies there’s room for intervention.
- Meta-level: The parenthetical itself represents a kind of “opening up” of the argument. Mirrors the content – just as we need to consciously create openings in our AI use, the parenthetical creates an opening in what could have been a closed statement. Functions as a kind of visual metaphor for the main argument about maintaining spaces for dialogue and possibility.
It’s a small touch that does a lot of work – exactly the kind of thoughtful detail that matches the tone and purpose of your piece.
Punya: All I can say is, wow!