ChatGPT3 is a bulls*** artist

Thursday, March 02, 2023

Back in 1986, the philosopher Harry G. Frankfurt wrote an essay titled “On Bullshit,” which he later expanded into a book, published in 2005. Essentially, the essay and the book are a rumination on the distinction between “bullshitters” and “liars.” He argues that:

Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

I was reminded of Frankfurt’s essay while listening to Ezra Klein’s podcast conversation with Gary Marcus (in an episode titled “A Skeptical Take on the AI Revolution”), where Marcus compared the output of large language models (of which ChatGPT3 is one) to Frankfurt’s idea of bullshit, arguing that ChatGPT3’s outputs meet most of the criteria for bullshit as Frankfurt laid them out. This analogy resonated with me, and it sent me back to the original essay, which in turn led to some reflections on the role of large language models in education and beyond.

But first, some quotes from Frankfurt. I think it may be useful to read them in the context of the emergence of ChatGPT3 and other large language models, and their penchant for making things up, for “hallucinating,” as it were.

Quote 1: Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.

Quote 2: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.

Quote 3: What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit…

These quotes are an almost perfect description of what these large language models do. They manipulate words with no understanding of what they mean and no correspondence to reality or truth. This is because ChatGPT3, like other large language models, has no internal model of the world. Truth is not something these systems have been designed to care about. They float in a sea of words, unmoored from reality, unconcerned with meaning, reality, and truth.

And though these large language models can be wrong (often hilariously wrong), they ARE surprisingly good at what they can do. As many people have commented, large language models are particularly good when it comes to “formulaic” or “genre” writing. I mean, all you have to do is ask it to write a song in the style of Eminem and it will spit one out right away. Moreover, it is good at genre-mashing, such as writing a poem about “losing your socks in the dryer in the style of the declaration of independence” (an example taken from Ezra Klein). This is similar to my attempts to get ChatGPT3 to write a mathematical proof in rhyming verse – not great, but not bad either. These models can also deal with counterfactuals and hypotheticals (what if Red Riding Hood never went to the forest, or what if the wolf were a cow?). Clearly, there IS some higher-level abstraction happening here (at least at the level of style and genre convention) that allows this to happen. And it is often this slipperiness that makes it appear creative.

This ability to mimic genre conventions is important, because peddling bullshit is not easy. It requires knowledge of genre conventions and an understanding of how to phrase statements appropriately to create a persuasive argument.

And ChatGPT3 is incredibly powerful at regurgitating this surface sophistication, making it sound like a more than plausible interlocutor. And it will only get better at this mimicry with time and training. It will, however, never understand the meaning of what it spits out, since that has never been the goal of the project.

It is this combination of an ability to mimic or mash up genre conventions and an indifference to truth that makes ChatGPT3 a genuine bullshit artist!

If we accept this, there are some interesting issues that emerge.

First, this means that in education, where we have some tried and tested genres (such as the five-paragraph essay), ChatGPT3 can do a pretty good job of faking it (as long, of course, as you don’t care about factual accuracy). It is this that has had many educators in a tizzy. But that may be less a function of ChatGPT3 than a limitation of how we have conventionally thought about assessing student learning.

Second, and this to me is a bigger issue, the easy availability of these tools means that the cost of creating bullshit is effectively zero. Which in turn means that there will now be a lot more misinformation out there in the world, some created by happenstance, but a lot also created deliberately to mislead and misinform. (Think about how the amount of spam blew up once the cost of sending email messages dropped effectively to zero.) And determining what is true and what is not will be even more difficult than it is today.

That is what we get by letting a super powerful bullshit artist into the world.


Postscript:

A couple of things came to my attention after I had written this post that I thought would be valuable to include here.

First, it appears that the connection between Frankfurt’s work on bullshit and ChatGPT3 is in the zeitgeist. For instance, here is a quote from a recent article in New York magazine, titled “You are not a parrot. And a chatbot is not a human” (thanks to Kottke.org for the link):

This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.

Second, the issue of what happens when the cost of creating bullshit drops (almost) to zero is captured by this story from NPR about how a science fiction magazine has had to cut off submissions after a flood of AI-generated stories.

Finally, this just came my way: a Washington Post story about how scammers are using AI voice generation to cheat people out of their money, titled “They thought loved ones were calling for help. It was an AI scam.” The subtitle says it all: “Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.”
