Note: This blog post was almost entirely written by ChatGPT based on an analysis of a set of images I had uploaded to it. The image above (Weird AI) is an original typographic design created by me. The background sky was created by Adobe Firefly.
To give some context, I took six slides from the set I had used for a recent keynote presentation at the AI Summit in Michigan. Essentially, these slides were an extended riff about how generative AI is weird, along multiple dimensions.
I then combined these slides into one large image (see below) and asked ChatGPT to interpret it and write a blog post for me. The exact prompt and GPT's response are given below. GPT's words are in blue and have been included as is, with no editing. I have also added some references to each of the points related to the slides.
The 6 slides that were uploaded to ChatGPT
Prompt: There are a set of 6 slides that I used in a recent presentation to make the point that Gen AI is weird. Can you take this set of six slides and come up with a nice descriptive blog post on this topic where each of these images would be used to make a point about the weird nature of gen AI
Creating a descriptive blog post based on the provided slides can be a great way to delve into the peculiarities of generative AI and the biases inherent in these systems. Here’s a structured outline for your blog post:
The Weird Quirks of Generative AI: A Look at Bias and Unpredictability
Generative AI has become a cornerstone of modern technology, from chatbots to content creation tools. However, as advanced as these systems are, they often exhibit behaviors and outputs that can be deemed as weird or unexpected. A recent presentation highlighted these oddities through a series of insightful slides.
The acronym WEIRD captures the essence of generative AI’s primary dataset bias. These systems tend to reflect the biases of their training data, which disproportionately represents Western, Educated, Industrialized, Rich, and Democratic societies. This skew in perspective can lead to AI systems that perform better for users from these backgrounds while potentially alienating others.
It appears that generative AI can sometimes perform better when prompted with emotional requests. This suggests that the way we interact with AI can affect its performance, possibly because emotional prompts are more detailed and provide better context for the AI to grasp the user’s intent.
Reference: Large language models can be enhanced by emotional stimuli (Arxiv article: Link)
The Concept of ‘Tipping’ AI
A curious phenomenon observed is the improvement in AI performance when the concept of ‘tipping’ is introduced. While not a literal monetary tip, this metaphorical tipping refers to providing extra information or incentives that seem to spur the AI to ‘try harder’ or ‘think deeper’, resulting in better output.
Instructing an AI to ‘take a deep breath’ had an interesting outcome: a noticeable improvement in math scores in a study. This anthropomorphic instruction hints at the potential of using human-like encouragements to optimize AI performance, which is as fascinating as it is bizarre.
Reference: Telling AI model to “take a deep breath” causes math scores to soar in study (Ars Technica article: Link)
Seasonal Performance Variability
Interestingly, generative AI, specifically GPT-4, showed a dip in performance during December. This suggests that AI systems can ‘learn’ patterns from their interactions with humans, including the tendency to ‘wind down’ during holiday seasons. While this is likely a reflection of the training data, it showcases how AI can mirror human behaviors in unexpected ways.
Reference: Example of the December effect (i.e. GPT results are worse after December break). LINK
Implicit Bias in AI Systems
The final point of discussion is the implicit bias present in all AI systems. These biases can come from various sources, such as the data they’re trained on, the engineers who build them, and even the societal norms that sneak into their algorithms. Recognizing and addressing these biases is crucial for creating fair and equitable AI systems.
In conclusion, the presentation’s visuals offer a thought-provoking look at the oddities and biases of generative AI. These ‘weird’ behaviors underline the importance of understanding and improving the ways we build and interact with AI systems.
(Note: In your blog post, you would replace the placeholders like #image-1 with actual images from your slide deck.)
For any further assistance or to flesh out each section with more detail, feel free to reach out!
Not too bad!
What, I wonder, is weirder: the fact that generative AI is weird in these specific ways, or that we take these capabilities for granted? None of this (uploading images and having GPT interpret them and write a coherent piece of prose) was possible even six months ago. Let that fact sink in…