ChatGPT3 is a bulls*** artist

by | Thursday, March 02, 2023

Back in 1986 the philosopher Harry G. Frankfurt wrote an essay titled “On bullshit” which he then expanded into a book, published in 2005. Essentially, the essay and the book are a rumination on the distinction between “bullshitters” and “liars.” He argues that:

Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

I was reminded of Frankfurt’s essay while listening to Ezra Klein’s podcast conversation with Gary Marcus (in an episode titled: A skeptical take on the AI revolution), where Marcus compared the output of large language models (of which ChatGPT3 is one) to Frankfurt’s idea of bullshit, arguing that the outputs of ChatGPT3 meet most of the criteria for bullshit as laid out by Frankfurt. This analogy resonated with me, and made me go back and revisit the original essay, which in turn led to some reflections on the role of large language models in education and beyond.

But first some quotes from Frankfurt. I think it may be useful to read them in the context of the emergence of ChatGPT3 and other large language models, and their penchant for making things up, of “hallucinating” as it were.

Quote 1: Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.

Quote 2: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.

Quote 3: What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit…

These quotes are an almost perfect description of what these large language models do. They manipulate words with no understanding of what they mean, with no correspondence with reality or truth. This is because ChatGPT3 (and other large language models) have no internal model of the world. Truth is not something they have been designed to care about. They float in a sea of words, unmoored from reality, unconcerned about meaning, reality and truth.

And though these large language models can be wrong (often hilariously wrong) they ARE surprisingly good at what they can do. As many people have commented, large language models are particularly good when it comes to “formulaic” or “genre” writing. All you have to do is ask for a song in the style of Eminem and it will spit one out right away. Moreover, it is good at genre-mashing, such as writing a poem about “losing your socks in the dryer in the style of the declaration of independence” (an example taken from Ezra Klein). This is similar to my attempts to get ChatGPT3 to write a mathematical proof in rhyming verse – not great but not bad either. These models can also deal with counterfactuals and hypotheticals (what if Red Riding Hood never went to the forest, or what if the wolf were a cow). Clearly, there IS some higher-level abstraction happening here (at least at the level of style and genre convention) that allows this to happen. And it is often this slipperiness that makes it appear creative.

This ability to mimic genre conventions is important, because peddling bullshit is not easy. It requires knowledge of genre conventions, an understanding of how to phrase statements appropriately to create a persuasive argument.

And ChatGPT3 is incredibly powerful at regurgitating this surface sophistication, making it sound like a more than plausible interlocutor. And it will only get better at this mimicry with time and training. It will, however, never understand the meaning of what it spits out, since that has never been the goal of the project.

It is this ability to mimic or mash up genre conventions, combined with an indifference to truth, that makes ChatGPT3 a genuine bullshit artist!

If we accept this, there are some interesting issues that emerge.

First, this means that in education, where we have some tried and tested genres (such as the 5 paragraph essay), ChatGPT3 can do a pretty good job of faking it. (As long, of course, as you don’t care about factual accuracy.) It is this that has had many educators in a tizzy. But that may not be as much a function of ChatGPT3 as it may be a limitation of how we have conventionally thought about assessing student learning.

Second, and this to me is a bigger issue, the easy availability of these tools means that the cost of creating bullshit is effectively zero. Which in turn means that there will now be a lot more misinformation out there in the world, some created by happenstance, but a lot also created deliberately to mislead and misinform. (Think about how the amount of spam blew up once the cost of sending email messages dropped effectively to zero.) And determining what is true and what is not will be even more difficult than it is today.

That is what we get by letting a super powerful bullshit artist into the world.


Postscript:

A couple of things came to my attention after I had written this post that I thought would be valuable to include here.

First, it appears that the connection between Frankfurt’s work on bullshit and ChatGPT3 is in the zeitgeist. For instance, here is a quote from a recent article in New York, titled: You are not a parrot. And a chatbot is not a human (thanks to Kottke.org for the link).

This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.

Second, the issue of what happens when the cost of creating bullshit drops (almost) to zero is captured by this story in NPR about how a science fiction magazine has had to cut off submissions after a flood of AI-generated stories.

Finally, this just came my way. A Washington Post story about how scammers are using AI voice generation to cheat people out of their money: They thought loved ones were calling for help. It was an AI scam. (The subtitle says it all: Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.)
