ChatGPT as a blurry jpeg of the web

Sunday, February 12, 2023

Ted Chiang is one of the greatest, most insightful writers working today. I had previously written about his short story The truth of fact and the truth of feeling in a post titled Truth of fact and feeling: Unpacking McLuhan (2/3). (If you haven’t read it, please do so.)

In a recent piece in the New Yorker (ChatGPT is a blurry jpeg of the web), Ted Chiang focuses on ChatGPT and offers one of the best descriptions of the technology, drawing an analogy with lossy compression systems (such as the JPEG file format we usually use to compress images). The essay is worth reading in full, but here are some key quotes:

… Think of ChatGPT as a blurry jpeg of all the text on the Web. It retains much of the information on the Web, in the same way that a jpeg retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry jpeg, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.
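
The "exact sequence of bits" point is easy to demonstrate for images. Here is a minimal sketch of my own (it assumes the Pillow imaging library, and the solid-colour test image is just a stand-in for a real photo): re-encode an image as a low-quality JPEG and the original pixel values are no longer exactly recoverable, even though the picture as a whole still looks right.

```python
# Minimal sketch of lossy compression (assumes the Pillow library is installed).
# A solid-colour image stands in for a real photo.
import io
from PIL import Image

original = Image.new("RGB", (64, 64), color=(200, 30, 30))

# Re-encode at very low JPEG quality: much of the exact information is discarded.
buffer = io.BytesIO()
original.save(buffer, format="JPEG", quality=10)
decoded = Image.open(io.BytesIO(buffer.getvalue()))

# The decoded pixels are approximations of the originals, not the original bits.
print("original pixel:", original.getpixel((0, 0)))
print("decoded pixel: ", decoded.getpixel((0, 0)))
```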

This analogy makes even more sense when we remember that a common technique used by lossy compression algorithms is interpolation—that is, estimating what’s missing by looking at what’s on either side of the gap. When an image program is displaying a photo and has to reconstruct a pixel that was lost during the compression process, it looks at the nearby pixels and calculates the average. This is what ChatGPT does when it’s prompted to describe, say, losing a sock in the dryer using the style of the Declaration of Independence: it is taking two points in “lexical space” and generating the text that would occupy the location between them. (“When in the Course of human events, it becomes necessary for one to separate his garments from their mates, in order to maintain the cleanliness and order thereof. . . .”) ChatGPT is so good at this form of interpolation that people find it entertaining: they’ve discovered a “blur” tool for paragraphs instead of photos, and are having a blast playing with it.
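
To make the interpolation idea concrete, here is a toy sketch of the kind of averaging Chiang describes (the function and the tiny grid are mine, purely for illustration): a pixel lost in compression is estimated from the average of its surviving neighbours, which yields a plausible value rather than the original one.

```python
# Toy illustration of interpolation: estimate a missing pixel (None) from
# the average of its four neighbours. Entirely hypothetical example data.

def interpolate_pixel(image, row, col):
    """Guess a missing grayscale value by averaging the neighbouring pixels."""
    neighbours = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < len(image) and 0 <= c < len(image[0]) and image[r][c] is not None:
            neighbours.append(image[r][c])
    return sum(neighbours) / len(neighbours) if neighbours else 0

# A 3x3 grayscale patch where the centre pixel was lost during compression.
image = [
    [100, 110, 120],
    [105, None, 125],
    [110, 120, 130],
]

print(interpolate_pixel(image, 1, 1))  # 115.0: a plausible guess, not the original value
```

The same "fill in the gap from what surrounds it" move, applied to paragraphs rather than pixels, is what Chiang is pointing to in the passage above.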

Given GPT-3’s failure at a subject taught in elementary school, how can we explain the fact that it sometimes appears to perform well at writing college-level essays? Even though large language models often hallucinate, when they’re lucid they sound like they actually understand subjects like economic theory. Perhaps arithmetic is a special case, one for which large language models are poorly suited. Is it possible that, in areas outside addition and subtraction, statistical regularities in text actually do correspond to genuine knowledge of the real world?

I think there’s a simpler explanation. Imagine what it would look like if ChatGPT were a lossless algorithm. If that were the case, it would always answer questions by providing a verbatim quote from a relevant Web page. We would probably regard the software as only a slight improvement over a conventional search engine, and be less impressed by it. The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something. When we’re dealing with sequences of words, lossy compression looks smarter than lossless compression.

Do read the actual piece… it is worth your time: ChatGPT is a blurry jpeg of the web
