Google’s recent release of NotebookLM has stirred up quite a buzz, particularly its podcast feature. At first glance, it might not seem revolutionary—after all, we’ve had AI tools that can engage with uploaded documents for a while now. And it does not require much fancy prompting to create a dialogue that can then be converted into an MP3 with fake AI-generated voices. At some level, it simply automates a sequence of tasks that could have been done anyway, just making it easier.
But there’s something about this particular implementation that has captured people’s imagination. So, what’s the secret sauce?
The answer, I believe, lies not in the technology itself, but in how it taps into our psychological tendencies. As I’ve argued in my previous post, “Turing’s Tricksters: How AI Hijacks Our Social Instincts,” we humans are hardwired to see intentionality and agency even in the simplest of stimuli. NotebookLM’s podcast feature exploits this tendency masterfully.
What sets NotebookLM apart is the uncanny valley-leaping realism of the conversation between the AI hosts. The banter, the pregnant pauses, the “um”s and “ah”s—it’s like eavesdropping on a coffee shop conversation between two particularly well-read friends. All these elements create a powerful illusion of psychological reality. It’s not just a robotic recitation of facts; it’s a dynamic, seemingly organic dialogue that our brains interpret as a real conversation between thinking beings. It is catnip to our stone-age brains, ever eager to find faces in clouds and voices in the wind.
And I get it, I really do. The emotional connection, the feeling of having a knowledgeable friend explain complex topics, is incredibly powerful. Just listen to one and you will know what I mean. Cognitively, I may know this is just algorithms at work – but emotionally and psychologically it feels real.
If you haven’t listened to one, check out the one embedded below.
I gave NotebookLM the first article of a series (Of art and math) that Gaurav Bhatnagar and I had written for the math education magazine “At Right Angles.” In this series we explored everything from symmetry to self-similarity, fractals to paradoxes. I uploaded that first article and asked NotebookLM to create a podcast.
It’s pretty insane how good it is – how real the conversation feels.
(To those interested in learning more about ambigrams or in reading our articles: I have embedded them at the end of this post. I have also embedded another podcast episode that NotebookLM created, this time from the entire series. That one goes deeper into the math and also does some interesting and weird things. For instance, it spells out the word “is” as separate letters multiple times, totally breaking the illusion for a moment. It also, at the end, comes up with a fascinating challenge for the listener—something that was not in our original prose.)
Interestingly, the occasional errors or “hallucinations” in these AI-generated podcasts might actually work in their favor. As I noted in “Cats on the moon: How OpenAI, Google, Meta, Microsoft & Apple are Dealing with Hallucinations,” these quirks can make each conversation feel unique and emergent, further enhancing the illusion of authenticity.
This psychological realism is reminiscent of what made ChatGPT such a sensation when it was first released around 2 years ago. It wasn’t just its ability to process language, but its capacity to engage in natural conversation, remember context, and adapt its responses. NotebookLM’s podcast feature takes this a step further by adding auditory cues that our brains associate with human conversation.
Many educators and technologists are touting this as a revolutionary tool for making books more accessible, turning lectures into fireside chats. And I understand the appeal—it’s incredibly realistic, as if you’re listening to real people discuss complex topics. The emotional connection it fosters is indeed magical.

However, as an educator, I’m skeptical of its educational uses and see some potential pitfalls. While it may make content more engaging, AI-generated explanations can, as we know, be inaccurate or oversimplified. More importantly, they offer us a convenient way around the “hard work” of learning, of truly engaging with texts and ideas. Finally, the very realism that makes it appealing could also make it harder for us to critically evaluate the information being provided.
In the end, the success of NotebookLM’s podcast feature is as much about the technology as it is about us. It’s a testament to how well we can now program AI to exploit our cognitive biases and social instincts. We need to recognize when AI is pushing our psychological buttons, and what that means for education, information dissemination, and our understanding of reality itself.
In the age of AI, the most important operating system to understand isn’t running on silicon chips—it’s the one between our ears.
If you are interested in how ambigrams combine math and art, I am embedding the entire series of articles below – as well as a podcast episode that NotebookLM created based on the whole series.
A podcast episode created by NotebookLM based on the entire series. There are a few glitches, but it does get deeper into the math – which is interesting.