Corporations as Paperclip Maximizers: AI, Data, and the Future of Learning

Sunday, January 05, 2025

Once in a while, you come across a piece of writing that doesn’t just make you think—it makes you rethink. It rearranges the furniture in your head, putting things together in ways you hadn’t considered but now can’t unsee.

Charles Stross’s essay, Dude, You Broke the Future, was one of those pieces for me. (For those interested, he has another post, Artificial Intelligence: Threat or Menace? which digs into similar ideas.)

In it, Stross makes a provocative analogy between paperclip maximizers (a thought experiment in AI ethics), corporations, and artificial intelligence. For context, the paperclip maximizer imagines an AI programmed with a single goal: maximize paperclip production—which it proceeds to do by converting all available matter in the universe into paperclips. Stross suggests that we don’t need AI for this to happen; it has already happened. Corporations, he argues, are already paperclip maximizers, and we live in a world that they have transformed.

The framing is at once simple and profound: corporations, like paperclip-maximizing AIs, pursue a single objective (profit) with ruthless, blind efficiency, regardless of the collateral damage.

The essay made me pause and rethink how we perceive these entities that dominate our social, economic, and political lives. More importantly, it connected with ideas I’ve been exploring in my own writing.

I’ve written previously about the nature of generative AI and whether history will repeat itself or just rhyme. In that piece, I reflected on the lessons we should—but likely won’t—learn from the social media revolution. I argued that new technologies, like AI, must be understood not just as tools but as part of a broader socio-technical-cultural world. Similarly, in When Tools Become Culture, I explored how technologies such as clocks and standardized time fundamentally redefined how we perceive and organize the world.

These tools not only altered our understanding of time; they also exemplify how technologies are shaped both by their inherent affordances and by the broader socio-technical-cultural systems in which they are embedded. Their impact is never purely inherent or purely external. A tool like generative AI comes with built-in affordances that shape how it is used—but it also exists within a context that influences and amplifies those effects. This interplay is crucial to understanding the double-edged nature of such technologies: they can disrupt and redefine, but they also reflect and reinforce the values of the systems into which they are introduced.

Stross’s essay ties neatly into these themes by suggesting that corporations have become something more: cultural technologies in their own right, shaping our collective consciousness in ways we seldom interrogate.

Note: I am fully aware that one criticism of this framing is its use of intentional language to describe corporations—attributing to them desires, goals, and decision-making as if they were conscious entities. This isn’t meant to echo Romney’s ‘corporations are people’ stance; rather, it reflects, as I have argued elsewhere, the limitations of our available metaphors. When confronted with complex decision-making systems—whether self-driving cars, AI, or corporations—we often resort to intentional language simply because we lack better alternatives. These anthropomorphic metaphors, while imperfect, help us grasp and reason about behaviors that emerge from intricate, interconnected processes that defy simpler description. There are other, more fundamental reasons (beyond the limitations of language) that I have also examined (as in this post: Beavers, brains & chatbots: Cognitive illusions in the age of AI).

The analogy of corporations as AIs is as unsettling as it is illuminating. Stross argues that corporations are, in essence, algorithmic entities—black boxes with one directive: maximize shareholder value. Like AI systems, they are complex decision-making systems whose inner workings are opaque: we observe their outputs, but their internal logic remains obscure.

They operate with a terrifying efficiency, innovating and adapting not out of malice or intent, but because their very survival depends on it. Consider OpenAI’s release of ChatGPT in November 2022. The world, as far as I know, wasn’t clamoring for a chatbot, but OpenAI rushed to release it to secure first-mover advantage, with little regard for the consequences of unleashing a half-baked technology into an already fraught information landscape. (We see this pattern continuing as OpenAI pivots to become a for-profit corporation.)

Like an AI tasked with maximizing paperclips, a corporation will decimate forests, manipulate political systems, and exploit workers because these actions align with its single-minded purpose. Social media platforms, seeking to maximize engagement, will leverage dark psychology techniques—exploiting our cognitive biases, promoting outrage, and fueling division—because these methods align with their core objective of keeping users hooked and generating ad revenue.

The brilliance of this analogy lies in how it shifts our perspective: corporations are not just run by people; they run over people. They are decision-making systems—or, if you like, AIs—that have slipped the leash of their creators, optimizing themselves at our expense.

What makes this insight so powerful is the lens it offers on our debates about corporate ethics or “conscious capitalism.” Stross’s analogy lets us see corporations, like the paperclip maximizer, as entities driven by a single goal: profit, pursued relentlessly and without regard for broader consequences.

This framing suggests that expecting a corporation to act ethically may be akin to expecting a paperclip maximizer to stop short of turning the planet into paperclips. It’s a compelling way to think about the limits of corporate responsibility. And while governments and regulations are intended to act as safeguards, the speed and scale at which these entities operate often leave such mechanisms struggling to keep pace.

It’s a bleak but necessary realization: the systems we’ve built are fundamentally misaligned with human flourishing, and tweaking at the edges won’t change that.


As an educator and researcher, I’ve been immersed in the ongoing hype about how AI will revolutionize education. I’ve argued before that whether or not AI transforms the classroom itself, it will inevitably reshape the world in which classrooms operate.

In his essay, Charles Stross draws a parallel between electric vehicles (EVs) and the concept of the paperclip maximizer, suggesting that EVs function as “battery maximizers”—machines optimized primarily to serve the interests of battery manufacturers. This analogy underscores how technologies, when driven by singular objectives, can lead to unintended consequences.

So if we ask what AI wants, the answer is data: an insatiable need for more and more of it. In the realm of education, that need will mean an increased emphasis on data-driven educational practices—what we will euphemistically call “personalized learning.” This is already happening: the need for ever more data to train these models has already begun to change how we think about and talk about learning.

I want to thank Charles Stross for helping me think further and deeper about these issues. His essay provides a lens through which to view the systems around us—and the lens is sharp, incisive, and unflinchingly honest.

That said, I find myself ending on a more pessimistic note. If these corporate entities are indeed runaway algorithms, then any meaningful rupture or resistance will not come from within the system.

It will emerge at the margins, in small niches where alternative ways of being can take root. That’s the best we can hope for: to create cracks along its edges where something new might grow.
