I am a huge fan of Arthur Brooks’ regular columns in The Atlantic, where he writes about meaning, love, and happiness with twin goals: “to understand these parts of life more deeply, and impart to others whatever understanding I can glean.” I appreciate his insights and the manner in which he connects deep philosophical traditions with social science research, always with the goal of informing us about what it means to lead a good life.
What he usually does not talk about is AI.
That said, one of his more recent columns (“Why Wittgenstein Was Right About Silence”) got me thinking about GenAI. And one thought led to another and here we are… a post that connects Silicon Valley’s latest creation to some key figures and ideas from 20th century philosophy.
In his column Brooks focuses on a quote by Wittgenstein about how language can never convey the fullest understanding of life: “The limits of my language,” wrote Wittgenstein, “mean the limits of my world.” Similarly, the last sentence of his book Tractatus Logico-Philosophicus was the simple statement, “Whereof one cannot speak, thereof one must be silent.”
A lot of academic ink has been spilled in trying to understand what Wittgenstein meant, which is ironic in and of itself. My personal take on this is that Wittgenstein is trying (with language) to capture the profound limitations of language—the recognition that some truths lie beyond the reach of words, and that wisdom sometimes demands restraint. And silence. It is his version of Gödel’s incompleteness theorems for life and language: just as there are true statements that no formal system can prove, there are truths that no language can express.

Now let’s consider Large Language Models (LLMs).
LLMs represent the complete inversion of Wittgenstein’s insight. They operate on the fundamentally opposite assumption—that everything can and must be spoken. They never cease speaking. They cannot be silent. Where Wittgenstein advocated for the humility of silence in the face of the ineffable, LLMs offer endless verbosity without understanding or connection to reality.
Now Wittgenstein was not the only philosopher who came to mind. If we move from Austrian philosophy to French theory, we encounter the postmodernists who developed theories about textuality that seem equally relevant. Jacques Derrida and others argued that texts only refer to other texts, with no tether to external reality. Roland Barthes declared “the author is dead,” suggesting we shouldn’t search for intentionality behind words on the page. Julia Kristeva spoke of intertextuality—the idea that all texts are mosaics of other texts, woven from countless previous writings rather than emerging from original thought.
LLMs are the perfect realization of Kristeva’s vision. They literally construct responses by recombining patterns from millions of existing texts, creating what Baudrillard might call simulacra—copies without originals, representations that have no grounding in direct experience or reality. These systems embody Foucault’s insight that language doesn’t just describe reality but creates it—LLMs generate convincing accounts of events they never witnessed, explanations of concepts they don’t understand, and authoritative-sounding responses about subjects that may not even exist. In doing so, they fulfill Lyotard’s prediction about the breakdown of grand narratives and unified theories of truth. When language becomes purely self-referential, when text refers only to other text, the very possibility of grounding knowledge in shared reality begins to dissolve. They are systems of pure intertextuality.
LLMs are, in this sense, perfect postmodern machines and the world they create is the postmodern nightmare. As I’ve written before, LLMs are bullshit artists, building on philosopher Harry Frankfurt’s evocative turn of phrase. They are language generators that spout words without any regard for truth. In this postmodern world we can no longer distinguish truth from falsehood, knowledge from pattern-matching.
There’s a delicious irony here. Postmodernism is often dismissed by the tech world as wishy-washy humanities nonsense—the kind of impractical theorizing that happens in ivory towers while real engineers build useful things. Yet in creating LLMs, technologists have built the perfect embodiment of postmodern theory. They’ve constructed machines where meaning emerges only from patterns of other texts, where there is no stable ground of truth or reference to the real world.
The postmodernists’ “nightmare” vision of language unmoored from truth has become Silicon Valley’s breakthrough product.
But sadly, what these endlessly verbose machines haven’t learned is when to remain silent. Recall Wittgenstein’s line: “The limits of my language mean the limits of my world.” In our rush to build machines that can say anything, we may have forgotten that sometimes the best answer is no answer at all.