ChatGPT3 is a bulls*** artist

Thursday, March 02, 2023

Back in 1986, the philosopher Harry G. Frankfurt wrote an essay titled “On Bullshit,” which he later expanded into a book, published in 2005. Essentially, both the essay and the book are a rumination on the distinction between “bullshitters” and “liars.” He argues that:

Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

I was reminded of Frankfurt’s essay while listening to Ezra Klein’s podcast conversation with Gary Marcus (in an episode titled “A Skeptical Take on the AI Revolution”), where Marcus compared the output of large language models (of which ChatGPT3 is one) to Frankfurt’s idea of bullshit, arguing that the outputs of ChatGPT3 meet most of the criteria for bullshit as laid out by Frankfurt. The analogy resonated with me and sent me back to the original essay, which in turn led to some reflections on the role of large language models in education and beyond.

But first, some quotes from Frankfurt. I think it may be useful to read them in the context of the emergence of ChatGPT3 and other large language models, and their penchant for making things up, for “hallucinating,” as it were.

Quote 1: Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.

Quote 2: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.

Quote 3: What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit…

These quotes are an almost perfect description of what these large language models do. They manipulate words with no understanding of what those words mean and no correspondence with reality or truth. This is because ChatGPT3 (like other large language models) has no internal model of the world. Truth is not something it has been designed to care about. These models float in a sea of words, unmoored from reality, unconcerned with meaning or truth.

And though these large language models can be wrong (often hilariously wrong), they ARE surprisingly good at what they can do. As many people have noted, large language models are particularly good when it comes to “formulaic” or “genre” writing. All you have to do is ask for a song in the style of Eminem and it will spit one out right away. Moreover, they are good at genre-mashing, such as writing a poem about “losing your socks in the dryer in the style of the Declaration of Independence” (an example taken from Ezra Klein). This is similar to my attempts to get ChatGPT3 to write a mathematical proof in rhyming verse – not great, but not bad either. These models can also deal with counterfactuals and hypotheticals (what if Red Riding Hood never went to the forest, or what if the wolf were a cow). Clearly, there IS some higher-level abstraction happening here (at least at the level of style and genre convention) that allows this to happen. And it is often this slipperiness that makes the output appear creative.

This ability to mimic genre conventions matters, because peddling bullshit is not easy. It requires a knowledge of those conventions, an understanding of how to phrase statements so that they add up to a persuasive argument.

And ChatGPT3 is incredibly good at regurgitating this surface sophistication, making it sound like a more than plausible interlocutor. It will only get better at this mimicry with time and training. It will, however, never understand the meaning of what it spits out, since that has never been the goal of the project.

It is this combination of an ability to mimic or mash up genre conventions with an indifference to truth that makes ChatGPT3 a genuine bullshit artist!

If we accept this, there are some interesting issues that emerge.

First, this means that in education, where we have some tried and tested genres (such as the five-paragraph essay), ChatGPT3 can do a pretty good job of faking it (as long, of course, as you don’t care about factual accuracy). It is this that has many educators in a tizzy. But that may be less a function of ChatGPT3 than a limitation of how we have conventionally thought about assessing student learning.

Second, and this to me is the bigger issue, the easy availability of these tools means that the cost of creating bullshit is effectively zero. This, in turn, means that there will now be a lot more misinformation out there in the world, some created by happenstance, but a lot also created deliberately to mislead and misinform. (Think about how the amount of spam blew up once the cost of sending email messages dropped effectively to zero.) And determining what is true and what is not will be even more difficult than it is today.

That is what we get by letting a super powerful bullshit artist into the world.


Postscript:

A couple of things came to my attention after I had written this post that I thought would be valuable to include here.

First, it appears that the connection between Frankfurt’s work on bullshit and ChatGPT3 is in the zeitgeist. For instance, here is a quote from a recent article in New York magazine, titled “You Are Not a Parrot. And a chatbot is not a human” (thanks to Kottke.org for the link):

This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.

Second, the issue of what happens when the cost of creating bullshit drops (almost) to zero is captured by this NPR story about how a science fiction magazine has had to cut off submissions after a flood of AI-generated stories.

Finally, this just came my way: a Washington Post story about how scammers are using AI voice generation to cheat people out of their money, titled “They thought loved ones were calling for help. It was an AI scam.” The subtitle says it all: “Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.”
