I recently had the pleasure of returning to the Modem Futura podcast for a second conversation with hosts Andrew Maynard and Sean Leahy, and guess what, it was even more fun than the first time around.
What started as a discussion about the latest AI developments in education (OpenAI’s study mode, agentic AI, and the flood of educational AI tools) evolved into something much deeper: a fundamental questioning of what we mean when we talk about “better” learning. When AI companies promise to help students “become better,” we have to ask: what does that mean? Better at what? According to whose definition? And perhaps most importantly, who gets to decide?
This seemingly simple question reveals a critical flaw that often doesn’t receive sufficient attention. The promise of making learning “better” or “faster” or “more efficient” assumes there’s a predetermined endpoint, a specific version of a human being we’re trying to manufacture. But as we explored in our conversation, this mechanistic view misses something essential about what learning actually is.
Instead of using AI (or any other technology, for that matter) for “becoming better,” what if we focused on AI as a tool for “becoming”? Full stop.
This isn’t just semantic wordplay. The difference is profound. “Becoming better” implies optimization toward a predetermined goal, complete with metrics, benchmarks, and usually someone else’s definition of success.
“Becoming,” on the other hand, embraces the beautiful uncertainty of human development without trying to optimize learners toward specific outcomes. It recognizes that learning is fundamentally about unfolding, not arriving at a destination.
Technology has always played this role. Books and literature, cinema and art are all great examples: they don’t prescribe a path, they offer possibilities. AI, seen this way, becomes one more technology that can help us in this process of becoming.
When we optimize for “better,” we end up with surveillance systems disguised as personalization, gamification schemes that manufacture motivation where deep meaning should exist, and efficiency tools that squeeze out the very humanity that makes learning worthwhile.
When we start with “becoming,” we ask entirely different questions: How can AI respond to intrinsic motivation rather than trying to create it? How can these tools amplify what people already care about? How can technology support the unfolding of human curiosity rather than directing it toward predetermined outcomes?
To illustrate what this looks like in practice, I shared a deeply personal example during our conversation. Over the past month or so, I’ve been working with Claude to learn to read Odia, my native language. My mother was a celebrated writer with 17 published books, but I can only read one, the single volume published in English. Despite speaking Odia fluently, I never learned to read and write it.
I created an AI-powered learning tool that helps me work through Odia characters and words. It’s not gamified. There are no streaks or points or leaderboards. When Claude suggested adding those features, I declined immediately. Why? Because I was already motivated at the deepest possible level: to connect with my mother’s work.
This is what learning-for-becoming looks like. It starts with profound personal meaning. It emerges from genuine inquiry, my curiosity about my mother’s writing. It involves engaging in a dialogue with the AI to build this tool, especially since I don’t know programming. It requires construction, actually building a web-based tool customized entirely for my needs. It includes expression, my identity as my mother’s son finding new ways to manifest itself. I’m not trying to “master” Odia or achieve some certification. I’m simply becoming someone who can engage with this part of my personal history in a new way.
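To make the contrast concrete, here is a minimal sketch of what a deliberately non-gamified practice tool could look like. This is a hypothetical illustration, not the author’s actual web-based tool; the character set and the `check` function are invented for the example, and the feedback deliberately carries no points, streaks, or leaderboards.

```python
# A hypothetical, deliberately non-gamified character-practice sketch.
# Sample data: a few Odia consonants with their romanizations.
ODIA_CHARACTERS = {
    "କ": "ka",
    "ଗ": "ga",
    "ମ": "ma",
}

def check(character: str, guess: str) -> str:
    """Return gentle feedback on a transliteration guess.

    No score is kept and nothing is counted; the response is
    information, not reward.
    """
    answer = ODIA_CHARACTERS.get(character)
    if answer is None:
        return f"{character} is not in the practice set yet."
    if guess.strip().lower() == answer:
        return f"Yes -- {character} is '{answer}'."
    return f"Not quite. {character} is '{answer}'; keep it in rotation."
```

The design choice is the point: every feature a gamified version would add (streaks, points, leaderboards) is simply absent, because the motivation is assumed to come from the learner, not the tool.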
These elements (inquiry, construction, communication, and expression) are what John Dewey identified as the four primary impulses for learning, and they emerged naturally from following my own curiosity rather than any prescribed curriculum.
This was one of many insights that emerged from the conversation with Andrew and Sean. We also touched on the surveillance implications of agentic AI, how much of what’s being marketed as educational innovation is actually behaviorism at scale, and why the best learning technologies are often the most transparent and personally controlled ones. But the core insight about becoming versus becoming better felt like the thread that tied everything together.
What I love about moments like this is how they emerge from genuine dialogue. Just as my recent conversation about AI literacy pushed me to refine Myers’ definition by adding the word ‘developed’ (recognizing that literacy is an ongoing process rather than a fixed state), this discussion revealed something I hadn’t fully articulated about educational technology. In both cases, a single word shift unlocked new understandings. It took these particular conversations, with these particular partners, to recognize the power of this shift.
Watch on YouTube (below) or listen on Modem Futura, episode 46, where we explore these themes alongside discussions of agentic AI, John Dewey’s natural impulses, and why the most powerful learning often happens when we stop trying to control the outcome.