Corporations as Paperclip Maximizers: AI, Data, and the Future of Learning

by | Sunday, January 05, 2025

Once in a while, you come across a piece of writing that doesn’t just make you think—it makes you rethink. It rearranges the furniture in your head, putting things together in ways you hadn’t considered but now can’t unsee.

Charles Stross’s essay, Dude, You Broke the Future, was one of those pieces for me. (For those interested, he has another post, Artificial Intelligence: Threat or Menace? which digs into similar ideas.)

In it, Stross makes a provocative analogy between paperclip maximizers (a thought experiment in AI ethics), corporations, and artificial intelligence. Just to give some context, the paperclip maximizer imagines an AI programmed with a single goal: maximize paperclip production—which it proceeds to do by converting all available matter in the universe into paperclips. Stross suggests that we don’t need AI for this to happen. It has already happened. Corporations, he argues, are already paperclip maximizers and we live in a world that they have transformed.

The framing is at once simple and profound: corporations, like paperclip-maximizing AIs, pursue a single objective (profit) with ruthless, blind efficiency, regardless of the collateral damage.

The essay made me pause and rethink how we perceive these entities that dominate our social, economic, and political lives. More importantly, it coalesced with some ideas I’ve been exploring in my own writing.

I’ve written previously about the nature of generative AI and whether history will repeat itself or just rhyme. In that piece, I reflected on the lessons we should—but likely won’t—learn from the social media revolution. I argued that new technologies, like AI, must be understood not just as tools but as part of a broader socio-technical-cultural world. Similarly, in When Tools Become Culture, I explored how technologies such as clocks and standardized time fundamentally redefined how we perceive and organize the world.

These tools not only altered our understanding of time but also exemplify how technologies function as both products of their inherent affordances and the broader socio-technical-cultural systems in which they are embedded. Their impact is never purely inherent or purely external. A tool like generative AI comes with built-in affordances that shape how it is used—but it also exists within a context that influences and amplifies those effects. This interplay is crucial to understanding the double-edged nature of such technologies: they can disrupt and redefine, but they also reflect and reinforce the values of the systems into which they are introduced.

Stross’s essay ties neatly into these themes by suggesting that corporations have become something more: cultural technologies in their own right, shaping our collective consciousness in ways we seldom interrogate.

Note: I am fully aware that one criticism of this framing is its use of intentional language to describe corporations—attributing to them desires, goals, and decision-making as if they were conscious entities. This isn’t meant to echo Romney’s ‘corporations are people’ stance, but rather reflects, as I have argued elsewhere, the limitations of our available metaphors. When confronted with complex decision-making systems—whether self-driving cars, AI, or corporations—we often must resort to intentional language simply because we lack better alternatives. These anthropomorphic metaphors, while imperfect, help us grasp and reason about behaviors that emerge from intricate, interconnected processes that defy simpler description. There are other, more fundamental reasons (beyond the limitations of language) that I have also examined (as in this post: Beavers, brains & chatbots: Cognitive illusions in the age of AI).

The analogy of corporations as AIs is as unsettling as it is illuminating. Stross argues that corporations are, in essence, algorithmic entities with one directive: maximize shareholder value. Like AI systems, they are black boxes—complex decision-making systems whose outputs we observe but whose internal logic remains obscure.

They operate with a terrifying efficiency, innovating and adapting not out of malice or intent, but because their very survival depends on it. Consider OpenAI’s release of ChatGPT in November 2022. The world, as far as I know, wasn’t clamoring for a chatbot, but OpenAI rushed to release it to secure first-mover advantage, with little apparent concern about unleashing a half-baked technology into an already fraught information landscape. (We see this pattern continuing as OpenAI pivots to become a for-profit corporation.)

Like an AI tasked with maximizing paperclips, a corporation will decimate forests, manipulate political systems, and exploit workers because these actions align with its single-minded purpose. Social media platforms, seeking to maximize engagement, will leverage dark psychology techniques—exploiting our cognitive biases, promoting outrage, and fueling division—because these methods align with their core objective of keeping users hooked and generating ad revenue.

The brilliance of this analogy lies in how it shifts our perspective: corporations are not just run by people; they run over people. They are decision-making systems—or, if you like, AIs—that have slipped the leash of their creators, optimizing themselves at our expense.

What makes this insight so powerful is that it offers a provocative lens through which to understand many of our debates about corporate ethics or “conscious capitalism.” Stross’s analogy allows us to see corporations as entities driven by singular goals, much like a paperclip maximizer—relentlessly pursuing profit without regard for broader consequences.

This framing suggests that expecting a corporation to act ethically may be akin to expecting a paperclip maximizer to stop short of turning the planet into paperclips. It’s a compelling way to think about the limits of corporate responsibility. And while governments and regulations are intended to act as safeguards, the speed and scale at which these entities operate often leave such mechanisms struggling to keep pace.

It’s a bleak but necessary realization: the systems we’ve built are fundamentally misaligned with human flourishing, and tweaking at the edges won’t change that.


As an educator and researcher, I’ve been immersed in the ongoing hype about how AI will revolutionize education. I’ve argued before that whether or not AI transforms the classroom itself, it will inevitably reshape the world in which classrooms operate.

In his essay, Charles Stross draws a parallel between electric vehicles (EVs) and the concept of the paperclip maximizer, suggesting that EVs function as “battery maximizers”—machines optimized primarily to serve the interests of battery manufacturers. This analogy underscores how technologies, when driven by singular objectives, can lead to unintended consequences.

So if we ask the question—what does AI want?—the answer is data: an insatiable appetite for it. In the realm of education, that appetite will mean an increased emphasis on data-driven educational practices—what we will euphemistically call “personalized learning.” And this is already happening. The need for data (and more data) to train these models has already begun to change how we think and talk about learning.

I want to thank Charles Stross for helping me think further and deeper about these issues. His essay provides a lens through which to view the systems around us—and the lens is sharp, incisive, and unflinchingly honest.

That said, I find myself ending on a more pessimistic note. If these corporate entities are indeed runaway algorithms, then any meaningful rupture or resistance will not come from within the system.

It will emerge at the margins, in small niches where alternative ways of being can take root. That’s the best we can hope for: to create cracks along the system’s edges where something new might grow.
