From Self-Driving Cars to Selfish Genes: Trapped in AI’s Metaphors, Literally

Monday, December 09, 2024

Tesla recently, and without any announcement, gave me temporary access to its Full Self-Driving system, and I decided to give it a whirl. It was somewhat unnerving to sit back and watch the car “do its thing.” But over time you come to understand how the car behaves, where it does well and in which contexts it does not. To be fair, I never felt I was in any danger, but there were moments when it would do things I would have done differently, and that would make me wonder how it was processing the information it was gathering and making the decisions it was making. (I should add that, for legal reasons, you do have to keep your hands on the steering wheel at all times, but apart from that the car is making all the moves, taking all the decisions.)

In attempting to describe this experience I found myself unable to escape the language of intention (it is all over the first paragraph as well, if you read carefully). As the car approached a left turn against oncoming traffic, it would stop, creep forward tentatively, and “look” before deciding whether to complete the turn. The anthropomorphizing was impossible to avoid – the car was being “cautious,” it was “checking” if it was safe, it was “waiting” for the right moment.

I see the same thing with my dog Omi. Watch any dog navigate the world and try to describe their behavior without attributing purpose, intention, or mental states. You can’t. When Omi pauses at the corner, cocks his head, and shifts his weight forward, he’s clearly “deciding” whether to cross. When he freezes mid-stride and fixates on a bush, he’s “suspecting” there might be a squirrel.

This tendency to see purpose and intention isn’t just a quirk of how we perceive AI systems or animal behavior. It runs deep in how we understand and describe complex systems that exhibit apparently purposeful behavior.

I am taking a slightly different angle in this post. I hope to explore how we describe complex systems that seem to demonstrate purpose – and how intentional language can be a useful shortcut, but one with a pernicious side as well. Perhaps nowhere is this more evident than in evolutionary biology, where scientists have long grappled with the challenge of describing natural selection without resorting to intentional language.

When Richard Dawkins wrote “The Selfish Gene,” he faced criticism for attributing purpose to genes – speaking of them as if they had desires and strategies. He knew genes don’t actually “want” anything, yet he argued that using such language was not just convenient but almost unavoidable. Without these metaphorical shortcuts, describing complex evolutionary processes becomes painfully convoluted.

We find ourselves in a similar position with artificial intelligence. We casually say that an AI model “thinks,” “believes,” or “wants” to do something. We describe language models as “hallucinating” or being “confused.” Just as with evolutionary biology, we know these anthropomorphic metaphors aren’t literally true – neural networks don’t actually “think” or “want” anything – yet we find ourselves reaching for these shortcuts constantly.

For instance, a recent story in Fast Company titled “Ultimate guide to ChatGPT, Gemini, Llama, and other genAI chatbots you need right now” is a good example of how metaphors are somewhat inescapable in this area. A quick read of the story reveals a range of metaphors used, some that are almost invisible to us now because they’ve become so normalized in discussions about AI. The text is filled with anthropomorphic language – AI systems that can “learn,” “understand,” and “recognize,” while competing in an “LLM race” and pushing capability “frontiers.” These systems are described as being “trained” and “fine-tuned,” capable of having “verbal conversations” and providing “responses” when they lack information.

This isn’t about whether AI systems are actually sentient or conscious – that’s a different debate. The interesting thing here is how our human sentience, our consciousness, shapes the language we use and the inferences we make. The metaphors we employ reveal more about our cognitive biases than about the systems themselves.

The parallel goes deeper. In all these cases – self-driving cars, animal behavior, evolution, and AI – we’re dealing with complex systems that produce results that appear purposeful and intelligent, yet emerge from underlying processes that are, in a sense, mechanical and purpose-free. Natural selection has no foresight or goals, yet produces creatures that seem exquisitely designed. Language models have no actual understanding or intentions, yet produce outputs that seem thoughtful and purposeful. My Tesla isn’t actually being “careful” – it’s executing algorithms based on sensor data.

These metaphorical shortcuts, while useful, can be dangerous. Taking the “selfish gene” metaphor too literally can lead to misunderstandings about evolution. Anthropomorphizing AI can lead us astray in understanding its capabilities and limitations. When we say a language model “knows” something or “wants” to help, we risk attributing to it capabilities and motivations it doesn’t possess.

Yet what’s the alternative? We’re social creatures, evolved to see minds and intentions everywhere we look. We can’t help but perceive agency and purpose, whether we’re watching a dog contemplate crossing a street, a self-driving car inch into an intersection, or a language model construct a response.

A large part of the problem is that the companies creating these technologies are deliberately leaning into such language. Furthermore, they are building it into the systems themselves, making them respond in emotive, agentic language that pushes these metaphors even deeper into our consciousness – and makes them invisible to scrutiny. Just yesterday, I was working with Claude, and when I mentioned Dawkins, it responded with an enthusiastic “Ah yes” – as if it had actually read and remembered his books, rather than just being trained on the text. Pure performance, yet that’s what we have been given to work with.

There’s a fascinating Catch-22 here – we’re using the black box of our own minds, with all its social instincts and tendency to see purpose everywhere, to try to understand other black boxes, whether they’re neural networks, animal behaviors, or evolutionary processes. And increasingly, these systems are being deliberately designed to trigger these social responses. ChatGPT isn’t accidentally conversational – it’s trained to engage our social instincts.

Just as the language of design and purpose in evolution has led both to misunderstandings (crude “survival of the fittest” interpretations) and to deliberate misuse (intelligent design arguments), we’re likely stuck with similar problems in AI. No amount of careful caveats or toggling between metaphorical and mechanical descriptions will change our fundamental tendency to see minds and purposes where there may be none.

We’re not going to think our way out of this one.

Perhaps the best we can do is acknowledge this trap we’re in – recognize that we’re using one set of anthropomorphic metaphors to understand another set of anthropomorphic metaphors, all while these systems are increasingly designed to trigger exactly these responses.
