Once in a while, you come across a piece of writing that doesn’t just make you think—it makes you rethink. It rearranges the furniture in your head, putting things together in ways you hadn’t considered but now can’t unsee.
Charles Stross’s essay, “Dude, You Broke the Future,” was one of those pieces for me. (For those interested, he has another post, “Artificial Intelligence: Threat or Menace?”, which digs into similar ideas.)
In it, Stross makes a provocative analogy between paperclip maximizers (a thought experiment in AI ethics), corporations, and artificial intelligence. Just to give some context, the paperclip maximizer imagines an AI programmed with a single goal: maximize paperclip production—which it proceeds to do by converting all available matter in the universe into paperclips. Stross suggests that we don’t need AI for this to happen; it has already happened. Corporations, he argues, are already paperclip maximizers, and we live in a world they have transformed.
The framing is at once simple and profound: corporations, like paperclip-maximizing AIs, pursue a single objective (profit) with ruthless, blind efficiency, regardless of the collateral damage.
The essay made me pause and rethink how we perceive these entities that dominate our social, economic, and political lives. More importantly, it coalesced with some ideas I’ve been exploring in my own writing.
I’ve written previously about the nature of generative AI and whether history will repeat itself or just rhyme. In that piece, I reflected on the lessons we should—but likely won’t—learn from the social media revolution. I argued that new technologies, like AI, must be understood not just as tools but as part of a broader socio-technical-cultural world. Similarly, in “When Tools Become Culture,” I explored how technologies such as clocks and standardized time fundamentally redefined how we perceive and organize the world.
These tools not only altered our understanding of time; they also exemplify how technologies function as products of both their inherent affordances and the broader socio-technical-cultural systems in which they are embedded. Their impact is never purely inherent or purely external. A tool like generative AI comes with built-in affordances that shape how it is used—but it also exists within a context that influences and amplifies those effects. This interplay is crucial to understanding the double-edged nature of such technologies: they can disrupt and redefine, but they also reflect and reinforce the values of the systems into which they are introduced.
Stross’s essay ties neatly into these themes by suggesting that corporations have become something more: cultural technologies in their own right, shaping our collective consciousness in ways we seldom interrogate.
Note: I am fully aware that one criticism of this framing is its use of intentional language to describe corporations—attributing to them desires, goals, and decision-making as if they were conscious entities. This isn’t meant to echo Romney’s ‘corporations are people’ stance, but rather reflects, as I have argued elsewhere, the limitations of our available metaphors. When confronted with complex decision-making systems—whether self-driving cars, AI, or corporations—we often must resort to intentional language simply because we lack better alternatives. These anthropomorphic metaphors, while imperfect, help us grasp and reason about behaviors that emerge from intricate, interconnected processes that defy simpler description. There are other, more fundamental reasons (beyond the limitations of language) that I have also examined, as in this post: Beavers, brains & chatbots: Cognitive illusions in the age of AI.
The analogy of corporations as AIs is as unsettling as it is illuminating. Stross argues that corporations are, in essence, algorithmic entities—black boxes with one directive: maximize shareholder value. Like AI systems, they are complex decision-making machines whose inner workings are opaque: we can observe their outputs, but their internal logic remains obscure.
They operate with a terrifying efficiency, innovating and adapting not out of malice or intent, but because their very survival depends on it. Consider OpenAI’s release of ChatGPT in November 2022. The world, as far as I know, wasn’t clamoring for a chatbot, but OpenAI rushed to release it to secure first-mover advantage, with little apparent consideration of the consequences of unleashing a half-baked technology into an already fraught information landscape. (We see this pattern continuing as OpenAI pivots to become a for-profit corporation.)
Like an AI tasked with maximizing paperclips, a corporation will decimate forests, manipulate political systems, and exploit workers because these actions align with its single-minded purpose. Social media platforms, seeking to maximize engagement, will leverage dark psychology techniques—exploiting our cognitive biases, promoting outrage, and fueling division—because these methods align with their core objective of keeping users hooked and generating ad revenue.
The brilliance of this analogy lies in how it shifts our perspective: corporations are not just run by people; they run over people. They are decision-making systems—or, if you like, AIs—that have slipped the leash of their creators, optimizing themselves at our expense.
What makes this insight so powerful is that it offers a provocative lens through which to understand many of our debates about corporate ethics or “conscious capitalism.” Stross’s analogy allows us to see corporations as entities driven by singular goals, much like a paperclip maximizer—relentlessly pursuing profit without regard for broader consequences.
This framing suggests that expecting a corporation to act ethically may be akin to expecting a paperclip maximizer to stop short of turning the planet into paperclips. It’s a compelling way to think about the limits of corporate responsibility. And while governments and regulations are intended to act as safeguards, the speed and scale at which these entities operate often leave such mechanisms struggling to keep pace.
It’s a bleak but necessary realization: the systems we’ve built are fundamentally misaligned with human flourishing, and tweaking at the edges won’t change that.
As an educator and researcher, I’ve been immersed in the ongoing hype about how AI will revolutionize education. I’ve argued before that whether or not AI transforms the classroom itself, it will inevitably reshape the world in which classrooms operate.
In his essay, Charles Stross draws a parallel between electric vehicles (EVs) and the concept of the paperclip maximizer, suggesting that EVs function as “battery maximizers”—machines optimized primarily to serve the interests of battery manufacturers. This analogy underscores how technologies, when driven by singular objectives, can lead to unintended consequences.
So if we ask what AI wants, the answer is data: an insatiable need for it. In education, that need will translate into an increased emphasis on data-driven practices, which we will euphemistically call “personalized learning.” And this is already happening. The demand for more and more data to train models has already begun to change how we think about and talk about learning.
I want to thank Charles Stross for helping me think further and deeper about these issues. His essay provides a lens through which to view the systems around us—and the lens is sharp, incisive, and unflinchingly honest.
That said, I find myself ending on a more pessimistic note. If these corporate entities are indeed runaway algorithms, then any meaningful rupture or resistance will not come from within the system.
It will emerge at the margins, in small niches where alternative ways of being can take root. That’s the best we can hope for: to create cracks along the system’s edges where something new might grow.