ChatGPT3 is a bulls*** artist

Thursday, March 02, 2023

Back in 1986 the philosopher Harry G. Frankfurt wrote an essay titled “On Bullshit,” which he then expanded into a book, published in 2005. Essentially, the essay and the book are a rumination on the distinction between “bullshitters” and “liars.” He argues that:

Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

I was reminded of Frankfurt’s essay while listening to Ezra Klein’s podcast conversation with Gary Marcus (in an episode titled “A skeptical take on the AI revolution”), in which Marcus compared the output of large language models (of which ChatGPT3 is one) to Frankfurt’s idea of bullshit, arguing that the outputs of ChatGPT3 meet most of the criteria for bullshit as laid out by Frankfurt. This analogy resonated with me and made me go back and revisit the original essay, which in turn led to some reflections on the role of large language models in education and beyond.

But first some quotes from Frankfurt. I think it may be useful to read them in the context of the emergence of ChatGPT3 and other large language models, and their penchant for making things up, of “hallucinating” as it were.

Quote 1: Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.

Quote 2: It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction.

Quote 3: What is wrong with a counterfeit is not what it is like, but how it was made. This points to a similar and fundamental aspect of the essential nature of bullshit…

These quotes are an almost perfect description of what these large language models do. They manipulate words with no understanding of what they mean, with no correspondence to reality or truth. This is because ChatGPT3 (like other large language models) has no internal model of the world. Truth is not something it has been designed to care about. These models float in a sea of words, unmoored from reality, unconcerned about meaning, reality and truth.

And though these large language models can be wrong (often hilariously wrong), they ARE surprisingly good at what they do. As many people have commented, large language models are particularly good when it comes to “formulaic” or “genre” writing. All you have to do is ask for a song in the style of Eminem and it will spit one out right away. Moreover, they are good at genre-mashing, such as writing a poem about “losing your socks in the dryer in the style of the declaration of independence” (an example taken from Ezra Klein). This is similar to my attempts to get ChatGPT3 to write a mathematical proof in rhyming verse – not great, but not bad either. These models can also deal with counterfactuals and hypotheticals (what if Red Riding Hood never went to the forest, or what if the wolf were a cow?). Clearly, there IS some higher-level abstraction happening here (at least at the level of style and genre convention) that allows this to happen. And it is often this slipperiness that makes it appear creative.

This ability to mimic genre conventions is important, because peddling bullshit is not easy. It requires knowledge of genre conventions, and an understanding of how to phrase statements appropriately to create a persuasive argument.

And ChatGPT3 is incredibly powerful at regurgitating this surface sophistication, making it sound like a more than plausible interlocutor. And it will only get better at this mimicry with time and training. It will, however, never understand the meaning of what it spits out, since that has never been the goal of the project.

It is this ability to mimic and mash up genre conventions, combined with an indifference to truth, that makes ChatGPT3 a genuine bullshit artist!

If we accept this, there are some interesting issues that emerge.

First, this means that in education, where we have some tried and tested genres (such as the five-paragraph essay), ChatGPT3 can do a pretty good job of faking it (as long, of course, as you don’t care about factual accuracy). It is this that has had many educators in a tizzy. But that may not be as much a function of ChatGPT3 as a limitation of how we have conventionally thought about assessing student learning.

Second, and this to me is a bigger issue, the easy availability of these tools means that the cost of creating bullshit is effectively zero. Which in turn means that there will now be a lot more misinformation out there in the world, some created by happenstance, but a lot also created deliberately to mislead and misinform. (Think about how the amount of spam blew up once the cost of sending email messages dropped effectively to zero.) And determining what is true and what is not will be even more difficult than it is today.

That is what we get by letting a super powerful bullshit artist into the world.


Postscript:

A couple of things came to my attention after I had written this post that I thought would be valuable to include here.

First, it appears that the connection between Frankfurt’s work on bullshit and ChatGPT3 is in the zeitgeist. For instance, here is a quote from a recent article in New York magazine, titled “You Are Not a Parrot. And a chatbot is not a human” (thanks to Kottke.org for the link).

This makes LLMs beguiling, amoral, and the Platonic ideal of the bullshitter, as philosopher Harry Frankfurt, author of On Bullshit, defined the term. Bullshitters, Frankfurt argued, are worse than liars. They don’t care whether something is true or false. They care only about rhetorical power — if a listener or reader is persuaded.

Second, the issue of what happens when the cost of creating bullshit drops (almost) to zero is captured by this NPR story about how a science fiction magazine has had to cut off submissions after a flood of AI-generated stories.

Finally, this just came my way: a Washington Post story about how scammers are using AI voice generation to cheat people out of their money, titled “They thought loved ones were calling for help. It was an AI scam.” (The subtitle says it all: “Scammers are using artificial intelligence to sound more like family members in distress. People are falling for it and losing thousands of dollars.”)
