ChatGPT3 is a bulls*** artist

Thursday, March 02, 2023

Back in 1986 the philosopher Harry G. Frankfurt wrote an essay titled “On bullshit” which he then expanded into a book, published in 2005. Essentially, the essay and the book are a rumination on the distinction between “bullshitters” and “liars.” He argues that:

Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

I was reminded of Frankfurt’s essay while listening to Ezra Klein’s podcast conversation with Gary Marcus (in an episode titled: A skeptical take on the AI revolution), in which Marcus compared the output of large language models (of which ChatGPT3 is one) to Frankfurt’s idea of bullshit, arguing that the outputs of ChatGPT3 meet most of the criteria for bullshit as laid out by Frankfurt. This analogy resonated with me, and made me go back and revisit the original essay, which in turn led to some reflections on the role of large language models in education and beyond.

But first some quotes from Frankfurt. I think it may be useful to read them in the context of the emergence of ChatGPT3 and other large language models, and their penchant for making things up, of “hallucinating” as it were.

Quote 1: Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.

Quote 2: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.

Quote 3: What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit…

These quotes are an almost perfect description of what these large language models do. They manipulate words with no understanding of what they mean, with no correspondence with reality or truth. This is because ChatGPT3 (and other large language models) have no internal model of the world. Truth is not something they have been designed to care about. They float in a sea of words, unmoored from reality, unconcerned about meaning, reality and truth.

And though these large language models can be wrong (often hilariously wrong) they ARE surprisingly good at what they can do. As many people have commented, large language models are particularly good when it comes to “formulaic” or “genre” writing. All you have to do is ask for a song in the style of Eminem and it will spit one out right away. Moreover, it is good at genre-mashing, such as writing a poem about “losing your socks in the dryer in the style of the declaration of independence” (an example taken from Ezra Klein). This is similar to my attempts to get ChatGPT3 to write a mathematical proof in rhyming verse – not great but not bad either. These models can also deal with counterfactuals and hypotheticals (what if Red Riding Hood never went to the forest, or what if the wolf were a cow). Clearly, there IS some higher-level abstraction happening here (at least at the level of style and genre conventions) that allows this to happen. And it is often this slipperiness that makes it appear creative.

This ability to mimic genre conventions is important, because peddling bullshit is not easy. It requires knowledge of genre conventions, and an understanding of how to phrase statements appropriately to create a persuasive argument.

And ChatGPT3 is incredibly powerful at regurgitating this surface sophistication, making it sound like a more than plausible interlocutor. And it will only get better at this mimicry with time and training. It will, however, never understand the meaning of what it spits out, since that has never been the goal of the project.

It is this combination of its ability to mimic or mash up genre conventions with an indifference to truth that makes ChatGPT3 a genuine bullshit artist!

If we accept this, there are some interesting issues that emerge.

First, this means that in education, where we have some tried and tested genres (such as the five-paragraph essay), ChatGPT3 can do a pretty good job of faking it. (As long, of course, as you don’t care about factual accuracy.) It is this that has had many educators in a tizzy. But that may not be as much a function of ChatGPT3 as it is a limitation of how we have conventionally thought about assessing student learning.

Second, and this to me is a bigger issue, the easy availability of these tools means that the cost of creating bullshit is effectively zero. Which in turn means that there will now be a lot more misinformation out there in the world, some created by happenstance, but a lot also created deliberately to mislead and misinform. (Think about how the amount of spam blew up once the cost of sending email messages dropped effectively to zero.) And determining what is true and what is not will be even more difficult than it is today.

That is what we get by letting a super powerful bullshit artist into the world.


Postscript:

A couple of things came to my attention after I had written this post that I thought would be valuable to include here.

First, it appears that the connection between Frankfurt’s work on bullshit and ChatGPT3 is in the zeitgeist. For instance, here is a quote from a recent article in New York magazine, titled: You are not a parrot. And a chatbot is not a human (thanks to Kottke.org for the link).

This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.

Second, the issue of what happens when the cost of creating bullshit drops (almost) to zero is captured by this story in NPR about how a science fiction magazine has had to cut off submissions after a flood of AI-generated stories.

Finally, this just came my way. A Washington Post story about how scammers are using AI voice generation to cheat people of their money: They thought loved ones were calling for help. It was an AI scam. (The subtitle says it all: Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.)
