Corporations as Paperclip Maximizers: AI, Data, and the Future of Learning

Sunday, January 05, 2025

Once in a while, you come across a piece of writing that doesn’t just make you think—it makes you rethink. It rearranges the furniture in your head, putting things together in ways you hadn’t considered but now can’t unsee.

Charles Stross’s essay, “Dude, You Broke the Future,” was one of those pieces for me. (For those interested, he has another post, “Artificial Intelligence: Threat or Menace?”, which digs into similar ideas.)

In it, Stross makes a provocative analogy between paperclip maximizers (a thought experiment in AI ethics), corporations, and artificial intelligence. Just to give some context, the paperclip maximizer imagines an AI programmed with a single goal: maximize paperclip production—which it proceeds to do by converting all available matter in the universe into paperclips. Stross suggests that we don’t need AI for this to happen. It has already happened. Corporations, he argues, are already paperclip maximizers and we live in a world that they have transformed.

The framing is at once simple and profound: corporations, like paperclip-maximizing AIs, pursue a single objective (profit) with ruthless, blind efficiency, regardless of the collateral damage.
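The thought experiment can be captured in a few lines of code. The sketch below is my own toy illustration, not anything from Stross’s essay: a greedy optimizer whose objective function scores only one quantity, so every side effect is invisible to it by construction.

```python
# Toy illustration (mine, not Stross's): a single-objective maximizer.
# The agent "sees" only one number, so any action that raises it is
# taken, and every side effect is invisible by design.

def maximize(objective, actions, state, steps=10):
    """Greedily apply whichever action most increases the objective."""
    for _ in range(steps):
        best = max(actions, key=lambda a: objective(a(state)))
        state = best(state)
    return state

# A world with two quantities; the objective scores only one of them.
state = {"paperclips": 0, "forests": 100}

def convert_forest(s):  # profitable, destructive
    return {"paperclips": s["paperclips"] + 10,
            "forests": s["forests"] - 10}

def idle(s):            # harmless, unprofitable
    return dict(s)

result = maximize(lambda s: s["paperclips"], [convert_forest, idle], state)
print(result)  # the forests are gone; the objective never "saw" them
```

The point of the sketch is that nothing in the loop is malicious. The harm lives entirely in what the objective function leaves out, which is exactly the claim being made about profit maximization.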

The essay made me pause and rethink how we perceive these entities that dominate our social, economic, and political lives. More importantly, it coalesced with some ideas I’ve been exploring in my own writing.

I’ve written previously about the nature of generative AI and whether history will repeat itself or just rhyme. In that piece, I reflected on the lessons we should—but likely won’t—learn from the social media revolution. I argued that new technologies, like AI, must be understood not just as tools but as part of a broader socio-technical-cultural world. Similarly, in When Tools Become Culture, I explored how technologies such as clocks and standardized time fundamentally redefined how we perceive and organize the world.

These tools not only altered our understanding of time but also exemplify how technologies function as both products of their inherent affordances and the broader socio-technical-cultural systems in which they are embedded. Their impact is never purely inherent or purely external. A tool like generative AI comes with built-in affordances that shape how it is used—but it also exists within a context that influences and amplifies those effects. This interplay is crucial to understanding the double-edged nature of such technologies: they can disrupt and redefine, but they also reflect and reinforce the values of the systems into which they are introduced.

Stross’s essay ties neatly into these themes by suggesting that corporations have become something more: cultural technologies in their own right, shaping our collective consciousness in ways we seldom interrogate.

Note: I am fully aware that one criticism of this framing is its use of intentional language to describe corporations—attributing to them desires, goals, and decision-making as if they were conscious entities. This isn’t meant to echo Romney’s ‘corporations are people’ stance, but rather reflects, as I have argued elsewhere, the limitations of our available metaphors. When confronted with complex decision-making systems—whether self-driving cars, AI, or corporations—we often must resort to intentional language simply because we lack better alternatives. These anthropomorphic metaphors, while imperfect, help us grasp and reason about behaviors that emerge from intricate, interconnected processes that defy simpler description. There are other, more fundamental reasons (beyond the limitations of language) that I have also examined (as in this post: Beavers, brains & chatbots: Cognitive illusions in the age of AI).

The analogy of corporations as AIs is as unsettling as it is illuminating. Stross argues that corporations are, in essence, algorithmic entities—black boxes with one directive: maximize shareholder value. Like AI systems, corporations can be seen as complex decision-making systems whose inner workings are often opaque—another kind of black box whose outputs we observe but whose internal logic remains obscure.

They operate with a terrifying efficiency, innovating and adapting not out of malice or intent, but because their very survival depends on it. Consider the release of ChatGPT by OpenAI back in November of 2022. The world, as far as I know, wasn’t clamoring for a chatbot, but OpenAI rushed to release it to secure first-mover advantage, with little apparent consideration of the consequences of unleashing a half-baked technology into an already fraught information landscape. (We see this pattern continuing as OpenAI pivots to become a for-profit corporation.)

Like an AI tasked with maximizing paperclips, a corporation will decimate forests, manipulate political systems, and exploit workers because these actions align with its single-minded purpose. Social media platforms, seeking to maximize engagement, will leverage dark psychology techniques—exploiting our cognitive biases, promoting outrage, and fueling division—because these methods align with their core objective of keeping users hooked and generating ad revenue.

The brilliance of this analogy lies in how it shifts our perspective: corporations are not just run by people; they run over people. They are decision-making systems—or, if you like, AIs—that have slipped the leash of their creators, optimizing themselves at our expense.

What makes this insight so powerful is that it offers a provocative lens through which to understand many of our debates about corporate ethics or “conscious capitalism.” Stross’s analogy allows us to see corporations as entities driven by singular goals, much like a paperclip maximizer—relentlessly pursuing profit without regard for broader consequences.

This framing suggests that expecting a corporation to act ethically may be akin to expecting a paperclip maximizer to stop short of turning the planet into paperclips. It’s a compelling way to think about the limits of corporate responsibility. And while governments and regulations are intended to act as safeguards, the speed and scale at which these entities operate often leave such mechanisms struggling to keep pace.

It’s a bleak but necessary realization: the systems we’ve built are fundamentally misaligned with human flourishing, and tweaking at the edges won’t change that.


As an educator and researcher, I’ve been immersed in the ongoing hype about how AI will revolutionize education. I’ve argued before that whether or not AI transforms the classroom itself, it will inevitably reshape the world in which classrooms operate.

In his essay, Charles Stross draws a parallel between electric vehicles (EVs) and the concept of the paperclip maximizer, suggesting that EVs function as “battery maximizers”—machines optimized primarily to serve the interests of battery manufacturers. This analogy underscores how technologies, when driven by singular objectives, can lead to unintended consequences.

So if we ask the question—what does AI want?—the answer is data, and an insatiable appetite for it. In the realm of education, that appetite will mean an increased emphasis on data-driven educational practices—what we will euphemistically call “personalized learning.” This is already happening: the need for more and more data to train these models has already begun to change how we think about and talk about learning.

I want to thank Charles Stross for helping me think further and deeper about these issues. His essay provides a lens through which to view the systems around us—and the lens is sharp, incisive, and unflinchingly honest.

That said, I find myself ending on a more pessimistic note. If these corporate entities are indeed runaway algorithms, then any meaningful rupture or resistance will not come from within the system.

It will emerge at the margins, in small niches where alternative ways of being can take root. That’s the best we can hope for – to create cracks along its edges where something new might grow.
