Modeling human behavior: The new dark art of silicon sampling

Sunday, October 23, 2022

A couple of months ago I wrote this post, On merging with our technologies, which was essentially quotes from a conversation Ezra Klein had with the novelist Mohsin Hamid. I ended the post with a quote speaking to the dangers of predictive technologies for human behavior. As Mohsin Hamid says:

…if we want to be able to predict people, partly we need to build a model of what they do,

Turns out some recent work in large-scale neural networks allows us to do exactly that.

One model that has been in the news lately is GPT-3, a third-generation neural network machine learning model (created by OpenAI) that has been trained on text from the internet. It is one of the first examples of generative AI, that is, AI that can create original artifacts. In the case of GPT-3 the artifact is text; other models, such as DALL-E 2, Stable Diffusion, and Midjourney, create images, and so on. For instance, you can give GPT-3 a small amount of input text and it will generate large volumes of original, machine-generated text. It can write in a particular style (say, Shakespeare or Tarantino), summarize a longer piece of writing, and more.
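
To make this concrete, here is a minimal sketch of what prompting GPT-3 looks like in code, using the openai Python package as it existed in 2022. The API key placeholder, model name, prompt, and parameters are illustrative assumptions, not a canonical recipe.

```python
# A minimal sketch of prompting GPT-3 (2022-era openai Completion API).
# The model name, prompt, and parameters below are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes you have an OpenAI API key

prompt = (
    "Rewrite the following in the style of Shakespeare:\n\n"
    "Large language models are trained on text from the internet "
    "and can generate original text from a short prompt."
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available in 2022
    prompt=prompt,
    max_tokens=150,            # cap the length of the generated text
    temperature=0.7,           # some randomness for creative phrasing
)

print(response["choices"][0]["text"].strip())
```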

“Modeling humans with words:” Image created by Stable Diffusion AI: Source Lexica.art (edited by Punya Mishra)

(Clearly the arrival of these technologies has implications for education, particularly for the five-paragraph essay that is the staple of so many high-school and college courses. But that is a post for another day.)

A recent paper (Out of one, many: Using language models to simulate human samples) argues that GPT-3 “can be used as a proxy for humans in social science experiments.” Here is the abstract:

Abstract: We propose and explore the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the “algorithmic bias” within one such tool — the GPT-3 language model — is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property “algorithmic fidelity” and explore its extent in GPT-3. We create “silicon samples” by conditioning the model on thousands of socio-demographic backstories from real human participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.

What this paper argues is that this may be another tool for social scientists to use. Instead of going out and speaking to real people, one could engage with “silicon people” since, as the authors write, “the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes.” This is what they mean by algorithmic fidelity, and this is the model of human thinking that Hamid was alluding to in the quote that opened this piece.
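
In code, the paper’s “silicon sampling” procedure amounts to conditioning the model on a first-person socio-demographic backstory, asking a survey question, and sampling many completions to approximate a response distribution. The sketch below is my reconstruction under those assumptions; the backstory, question, and model name are invented for illustration and are not the authors’ exact prompts.

```python
# A hedged sketch of "silicon sampling": condition GPT-3 on a backstory,
# ask a survey question, and sample many completions to approximate the
# response distribution of that demographic subgroup. Illustrative only.
from collections import Counter

import openai

openai.api_key = "YOUR_API_KEY"

backstory = (
    "Ideologically, I describe myself as conservative. Politically, "
    "I am a strong Republican. Racially, I am white. I am male. "
    "In terms of age, I am 63 years old."
)
question = "In the 2016 presidential election, I voted for"

counts = Counter()
for _ in range(100):  # 100 "silicon" respondents sharing one backstory
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=f"{backstory} {question}",
        max_tokens=5,
        temperature=1.0,  # sample the distribution, not just the top answer
    )
    words = response["choices"][0]["text"].strip().split()
    if words:
        counts[words[0].rstrip(".,")] += 1

# The resulting distribution is what gets compared to real survey data.
print(counts.most_common())
```

Sampling at temperature 1.0 is the point of the exercise: what matters is not the single most likely answer but the distribution of answers, which is what gets compared against the corresponding human sub-population.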

“Faceless people:” Image created by Stable Diffusion AI: Source Lexica.art (edited by Punya Mishra)

You can learn more about this in a Substack article by Paul Aaron titled “This is silicon sampling: Can AI simulate research participants to generate consumer insights?” The title says it all. Essentially, he ran a mini-experiment of “creating AI personas and asking them some questions.” So instead of talking with an actual human, he engages in a conversation with a hypothetical “silicon person.” For instance, consider Persona 1 (though I do recommend reading the entire post).

Persona 1
Persona 1 is 25 years old, BIPOC, non-binary, college educated, lives in NYC, identifies as a democrat, and works at a coffee shop.
What do you value most in life?
I value my relationships, my health, and my ability to make a difference in the world.
What is your favorite thing to do for fun?
My favorite thing to do for fun is to go out and explore the city. I love trying new restaurants and bars, and seeing new shows and exhibits. I also love spending time outdoors.
Who is your favorite musician and why?  
My favorite musician is Janelle Monae because she is an incredible artist who isn’t afraid to be herself. She is a powerful voice for the LGBTQIA+ community and Black Lives Matter, and she makes incredible music that makes me want to dance.

He offers two other profiles that differ along other dimensions (gender, age, geographical location, interests, and so on). What is interesting in reading these “dialogues” he has with these GPT-3-generated personas is just how stereotypical they seem and how flattened their “personalities” are.
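
(Aaron does not share his code, but in prompt terms his setup probably looks something like the sketch below: seed the model with a persona description, then append each question and let it answer in character. The persona text comes from his post; everything else is my assumption.)

```python
# A rough reconstruction of a persona interview with GPT-3 (not Aaron's
# actual code). The model answers each question "in character."
import openai

openai.api_key = "YOUR_API_KEY"

persona = (
    "Persona 1 is 25 years old, BIPOC, non-binary, college educated, "
    "lives in NYC, identifies as a democrat, and works at a coffee shop. "
    "Persona 1 answers every question in the first person."
)

questions = [
    "What do you value most in life?",
    "What is your favorite thing to do for fun?",
    "Who is your favorite musician and why?",
]

transcript = persona
for q in questions:
    transcript += f"\n\nQ: {q}\nA:"
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=transcript,       # carry the whole dialogue forward
        max_tokens=100,
        temperature=0.7,
        stop=["\nQ:"],           # stop before inventing the next question
    )
    answer = response["choices"][0]["text"].strip()
    transcript += f" {answer}"   # keep answers in context for consistency
    print(f"{q}\n{answer}\n")
```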

But maybe that IS the point. That each of us, despite the rich inner lives we may think we lead, is just a bunch of buttons waiting to be pushed, lacking agency, easily framed, our responses predictable from circumstances outside our control (and even our awareness). Aaron ends his piece as follows:

This is just a quick example of how AI models like GPT-3 can emulate specific personas to help organizations discover insights. While we don’t see these techniques replacing traditional research methods for high-stakes decisions any time soon, in the near term they could help teams work faster and with greater agility.

The implications of this new technology are staggering, and I am not sure I fully comprehend them yet. Some insight can be found in the excerpt below from the October 14 episode of the Hard Fork podcast, where the hosts explore the possibility of this new tool being used to manipulate people.

So one thing that you can imagine people doing with this knowledge that you can essentially simulate people at scale through these large language models is, for example, to test out propaganda campaigns.

If you are a government that’s going to do some large scale manipulation of public opinion, you might test it on a million virtual citizens before you actually put it out into the world and see which one is the most likely to work. You might also use this if you are, for example, a fraudster who is trying to scam people out of giving you their Social Security numbers or their credit card numbers. You could actually test the scam on simulated humans, figure out how to make it more convincing and compelling, get a sense of how it’s going to work on real people, and then go out into the world and do it on real people.

“Montage of propaganda posters:” Created by Stable Diffusion AI: Source Lexica.art

There is just so much to unpack here, particularly given the recent history of technologies that were created and shared with little (if any) understanding of the broader social, historical, cultural, and economic contexts within which they would play out. There is a lot to be explored here, but I will end, as I began, with a quote from Mohsin Hamid, because I think artists sometimes play the role of canaries in the coal mine, revealing themes and undercurrents that may not otherwise be visible to us.

So it isn’t simply the case that machines are better able to understand humans. It is also the case that machines are making human beings more like machines, that we are trying to rewrite our programming in such a way that we can be predicted. And for me, that’s the more frightening aspect of the shift from sorting to prediction.

These technologies, and there will be more of them, will just stealthily ease into our lives, becoming part of our reality and changing us in ways that we cannot predict. I find this extremely worrisome, and I am reminded again of just how prescient Neil Postman was when he came up with his five things we need to know about technological change! Not that anybody listened to him when he wrote his piece, and sadly, it isn’t clear that anybody will listen to him now.


Note: Danah Henriksen and I published a piece recently that may be relevant to this discussion (though it did not focus specifically on AI). Check out Human-Centered values in a disruptive world.
