I found out a couple of days ago that the philosopher John Searle passed away on September 17, just a couple of weeks ago. Searle was known for his work in the philosophy of language, philosophy of mind, and social philosophy. That said, he is best known for his critique of artificial intelligence.
It was in the context of AI that I first heard of Searle. I was in high school when I came across (and devoured) Douglas Hofstadter's Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid. It was there, among Hofstadter's playful dialogues and intricate analogies, that I first encountered Searle's famous Chinese Room thought experiment, his argument against the possibility of genuine machine understanding.
The setup of the argument was deceptively simple: a person who doesn't understand Chinese sits in a room with a rulebook for manipulating Chinese symbols. Questions written in Chinese come through a slot, and the person follows the rulebook's instructions for which symbols to send back. Thus the person in the "Chinese room" converses with others outside, and to them, it appears as though the room (or at least the person inside) "understands" Chinese. Yet, Searle argued, it is clear that the person inside comprehends nothing; they are just following rules.
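The mechanics of the room can be sketched in a few lines of code: a lookup table plays the rulebook, and the "operator" blindly maps incoming symbols to outgoing ones. The rules and phrases here are invented purely for illustration.

```python
# A toy "Chinese Room": the operator maps incoming symbols to outgoing
# symbols via a rulebook, with no grasp of what either side means.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你是谁？": "我是一个房间。",   # "Who are you?" -> "I am a room."
}

def room(question: str) -> str:
    """Follow the rulebook; fall back to a stock symbol if no rule matches."""
    return RULEBOOK.get(question, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # a fluent reply, with zero understanding inside
```

To the questioner outside, the replies are indistinguishable from conversation; inside, there is only the `get` call.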

As you can imagine, this is a slightly tweaked version of the Turing test. Alan Turing argued that if we can't distinguish between a conversation with a human and one with a machine, then the machine can, in his famous words, "think." Searle was throwing down the gauntlet, arguing that clearly the machine (the room with the person in it) did not engage in thinking at all.
Hofstadter himself was skeptical of Searle’s conclusions. He saw emergent meaning in complex systems that Searle denied. I remember finding Hofstadter’s optimism about artificial intelligence compelling, his belief that intelligence (and/or consciousness) might someday arise from the right arrangement of symbols and rules.
Yet decades later, I find myself thinking Searle might have been right all along.
Today's Large Language Models are, in essence, Chinese Rooms made manifest: sophisticated pattern-matching systems manipulating tokens without understanding. These systems process, predict, and respond without comprehension. (True, LLMs rely on probabilistic processes while Searle's room was strictly algorithmic, but the fundamental argument still holds.) Even Hofstadter, once so hopeful about emergent meaning, has admitted to being startled by these systems, astonished by their fluency but troubled by their lack of grounding in real experience.
What we're witnessing today is precisely what Searle described, but at an unprecedented scale and sophistication. And as LLMs and their avatars spread across the world, it is clear that we are all living in some mega version of Searle's Chinese room.
And as Searle argued: all rules, no meaning. All syntax, no semantics. Which is why LLMs are so good at replicating style and so bad with substance. They stay at the surface level of interaction, with little depth. (In fact, I have argued that, in educational contexts, the outputs of LLMs are at best "curriculum shaped objects.") They are, in Harry Frankfurt's famous turn of phrase, essentially bullshit generators. The very idea of truth eludes them, since they are not playing the truth game. They are following stochastic rules for generating the next word/phrase/token. That is all there is to it. There is no there there. No underlying understanding or connection to reality behind their fluent responses.
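Those "stochastic rules" can themselves be sketched. The toy bigram table below (invented for illustration; real LLMs use vastly larger neural models) picks each next word by weighted chance from whatever followed the previous word. Nothing in it models truth, only frequency.

```python
import random

# A toy next-word sampler: given the previous word, choose the next one
# according to observed frequencies. Fluent-looking, truth-blind.
BIGRAMS = {
    "the": {"room": 0.5, "rulebook": 0.3, "slot": 0.2},
    "room": {"replies": 0.9, "understands": 0.1},
}

def next_word(prev: str, rng: random.Random) -> str:
    """Sample the next word; fall back to 'the' for unseen words."""
    options = BIGRAMS.get(prev, {"the": 1.0})
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
sentence = ["the"]
for _ in range(4):
    sentence.append(next_word(sentence[-1], rng))
print(" ".join(sentence))
```

Whether the output says the room "replies" or "understands" is decided by a dice roll, not by anything the word refers to.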
And yet, that’s only half the story.
The other half came from Joseph Weizenbaum and his program ELIZA from the 1960s. ELIZA, a simple pattern-matching therapist, revealed something profound about us humans: we can't help but project meaning, intention, and understanding onto systems that merely simulate conversation. We fill in the semantic gaps even when we know better.
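ELIZA's trick was little more than a pattern plus a pronoun swap. The sketch below is in that spirit, not Weizenbaum's original script; the pattern and reflection table are invented for illustration.

```python
import re

# An ELIZA-style exchange: one regex pattern plus a pronoun-reflection
# table is enough to produce replies that *feel* attentive.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(utterance: str) -> str:
    m = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    return "Please, go on."  # stock reply when no pattern matches

print(eliza("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```

The program has no notion of anxiety or work; the sense of being heard is supplied entirely by the person typing.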
Turns out that meaning making happens at the other end of the human-machine interaction.
This creates a fascinating paradox in our daily interactions with AI. Even though we know with certainty, when we chat with LLMs, that our words are being processed without understanding (we are essentially slipping Chinese characters through a slot), a part of us still feels a connection, still senses a conversation. The room's true secret is not inside the machine at all; it is in us. We are the engines of meaning, forever reading comprehension into empty manipulation.
So yes, Searle was right, but with a twist. The systems lack semantics, but we supply it. We create meaning even where none exists. The room doesn’t understand Chinese, but the people outside look at the responses and build meaning from them anyway.
The issue is not whether the Chinese room is intelligent, but the tension between systems that manipulate without meaning and humans who can’t help but infer it.
The Chinese Room lives. And so does ELIZA. The unsettling truth is that we, too, live inside it now, simultaneously the operators, the interpreters, and the willing believers. Knowing the truth doesn’t mean that we can resist the illusion.




