{"id":11743,"date":"2023-07-26T08:40:36","date_gmt":"2023-07-26T15:40:36","guid":{"rendered":"https:\/\/punyamishra.com\/?p=11743"},"modified":"2023-08-22T08:07:33","modified_gmt":"2023-08-22T15:07:33","slug":"chatgpt-is-a-smart-drunk-intern-3-examples","status":"publish","type":"post","link":"https:\/\/punyamishra.com\/2023\/07\/26\/chatgpt-is-a-smart-drunk-intern-3-examples\/","title":{"rendered":"ChatGPT is a smart, drunk intern: 3 examples"},"content":{"rendered":"\n
Harry Frankfurt, the philosopher, passed away this past Sunday. He was 94. As the NYTimes obituary said, he was… <\/p>\n\n\n\n
\n… a philosopher whose fresh ideas about the human will were overshadowed in the broader culture by his analysis of a kind of dishonesty that he found worse than lying \u2014 an analysis presented in a bluntly titled surprise best seller, \u201cOn Bullshit\u201d…<\/p>\n<\/blockquote>\n\n\n\n
I had been inspired by Frankfurt’s work to write a blog post (ChatGPT is a bulls*** artist<\/a>) about how his ideas can help us understand what ChatGPT and other generative AI tools actually do. Over the past few months of working and playing with these tools (more play than work, honestly), I have come up with another metaphor. <\/p>\n\n\n\n
I have come to realize that working with generative AI is like having, at your beck and call, a really smart, but (occasionally) drunk, intern. <\/strong><\/p>\n\n\n\n
(For some reason, I keep thinking that this idea of ChatGPT being a smart, drunk intern, is not original to me. I have a vague recollection of reading this somewhere on the web, but despite multiple searches, I can’t seem to locate the source.) <\/p>\n\n\n\n
First, these tools are intelligent. I do not want to get into a discussion on defining intelligence, but in the most basic sense of the word (of having the capacity to learn, adapt, understand \/ handle abstract concepts, and solve problems), there is no doubt in my mind that these tools are intelligent. <\/p>\n\n\n\n
Second, these technologies are conversational, in that they use language (a uniquely human capability) and can understand and respond to queries and prompts in a threaded manner, guided by context and the history of prior interactions. This, combined with their expertise, makes them ideal working partners, smart interns as it were. <\/p>\n\n\n\n
The true potential of this technology shines through when we regard it not as a search engine that spits out the right answer, but rather as a partner, or a collaborator. Moreover, it is tireless. It never gets bored, even when asked to perform the most trivial of tasks. And it is almost puppy-like in its eagerness to help. It can, in pretty sophisticated ways, help you compare concepts, construct counterarguments, generate analogies, analyze data, and evaluate logic\u2014in short, help you think and get tasks done. <\/p>\n\n\n\n
There you have it, a pretty smart intern. <\/p>\n\n\n\n
Sadly, this intern sometimes hallucinates and makes things up. Moreover, it is quite confident in the quality of its output. And as you can imagine, that can be a problem. That is where the “drunk” part comes in. (Incidentally, it appears that GPT can do a pretty decent imitation of a drunk person<\/a> – given the right prompts, of course. But that is not what I am talking about.) <\/p>\n\n\n\n
Instead of speaking in the hypothetical, I would like to share three recent examples that showcase the talents and pitfalls of using genAI\u2014the powers it gives us when engaged correctly, and the ways in which it can lead us astray. Each of these examples builds on my explorations with Code Interpreter (and GPT4). Though my explorations of alcohol content and wine quality<\/a> were fun, I wanted to take it further, towards more relevant and authentic questions. Below are three examples of data analysis using Code Interpreter. The first is one that I did with some survey data; the other two were explorations with two of my colleagues, with very different kinds of data and analysis.<\/p>\n\n\n\n
\n\n\n\n