Modeling human behavior: The new dark art of silicon sampling

Sunday, October 23, 2022

A couple of months ago I wrote this post, On merging with our technologies, which was essentially quotes from a conversation Ezra Klein had with the novelist Mohsin Hamid. I finished the post with a quote speaking to the dangers of predictive technologies for human behavior. As Mohsin Hamid says:

…if we want to be able to predict people, partly we need to build a model of what they do,

Turns out, some recent work in large-scale neural networks allows us to do exactly that.

One that has been in the news lately is GPT-3. It is a third-generation neural network machine learning model (created by OpenAI) that has been trained on text from the internet. This is one of the first examples of generative AI: essentially, AI that can create original artifacts. In the case of GPT-3 that artifact is text, while other models such as DALL-E 2, Stable Diffusion, and Midjourney create images, and so on. For instance, using GPT-3 you can type in a small amount of input text and it will generate large volumes of original, machine-generated text. It can create text in a particular style (say, Shakespeare or Tarantino), summarize a longer piece of text, and more.

“Modeling humans with words:” Image created by Stable Diffusion AI: Source Lexica.art (edited by Punya Mishra)

(Clearly the arrival of these technologies has implications for education, particularly for the five-paragraph essay that is the staple of so many high-school and college courses. But that is a post for another day.)

A recent paper, "Out of one, many: Using language models to simulate human samples," argues that GPT-3 "can be used as a proxy for humans in social science experiments." Here is the abstract.

Abstract: We propose and explore the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the “algorithmic bias” within one such tool — the GPT-3 language model — is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property “algorithmic fidelity” and explore its extent in GPT-3. We create “silicon samples” by conditioning the model on thousands of socio-demographic backstories from real human participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.

What this paper argues is that this may become another tool for social scientists to use. Instead of going out and speaking to real people, one could engage with "silicon" people since, as the authors write, "the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes." This is what they mean by algorithmic fidelity, and this is the model of human thinking that Hamid was alluding to in the quote that began this piece.

“Faceless people:” Image created by Stable Diffusion AI: Source Lexica.art (edited by Punya Mishra)

You can learn more about this in a Substack article by Paul Aaron titled "This is silicon sampling: Can AI simulate research participants to generate consumer insights?" The title says it all. Essentially, he ran a mini-experiment of "creating AI personas and asking them some questions." So instead of talking with an actual human, he engages in a conversation with a hypothetical "silicon person." For instance, consider Persona 1 (though I do recommend reading the entire post).

Persona 1
Persona 1 is 25 years old, BIPOC, non-binary, college educated, lives in NYC, identifies as a democrat, and works at a coffee shop.
What do you value most in life?
I value my relationships, my health, and my ability to make a difference in the world.
What is your favorite thing to do for fun?
My favorite thing to do for fun is to go out and explore the city. I love trying new restaurants and bars, and seeing new shows and exhibits. I also love spending time outdoors.
Who is your favorite musician and why?  
My favorite musician is Janelle Monae because she is an incredible artist who isn’t afraid to be herself. She is a powerful voice for the LGBTQIA+ community and Black Lives Matter, and she makes incredible music that makes me want to dance.

He offers two other profiles that differ along other dimensions (gender, age, geographic location, interests, and so on). What is interesting in reading the "dialogues" he has with these GPT-3-generated personas is just how stereotypical they seem and how flattened their "personalities" are.
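For the curious, here is a minimal sketch of what such an experiment might look like in code, using the OpenAI completion API as it existed in 2022. The backstory text, prompt format, and parameter settings here are my own illustrative assumptions, not Aaron's actual setup.

```python
# A minimal sketch of "silicon sampling": condition GPT-3 on a persona
# backstory, then pose interview-style questions. The backstory, prompt
# format, and parameters below are illustrative assumptions, not Aaron's
# actual setup.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI account and API key

# A hypothetical backstory, modeled on Aaron's description of Persona 1.
persona = (
    "You are 25 years old, BIPOC, non-binary, college educated, "
    "live in NYC, identify as a Democrat, and work at a coffee shop."
)

def ask(question: str) -> str:
    """Ask the persona one question and return the model's answer."""
    prompt = f"{persona}\n\nInterviewer: {question}\nYou:"
    response = openai.Completion.create(
        model="text-davinci-002",  # a 2022-era GPT-3 model
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,  # allow some variability, as in a real interview
    )
    return response.choices[0].text.strip()

print(ask("What do you value most in life?"))
print(ask("What is your favorite thing to do for fun?"))
```

The persona is just a conditioning prefix: varying the backstory while holding the questions constant is what turns a language model into a "silicon sample."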

But maybe that IS the point: that each of us, despite the rich inner lives we may think we lead, is just a bunch of buttons waiting to be pushed, lacking agency, easily framed, our responses predictable from circumstances outside our control (and even our awareness). Aaron ends his piece as follows:

This is just a quick example of how AI models like GPT-3 can emulate specific personas to help organizations discover insights. While we don’t see these techniques replacing traditional research methods for high-stakes decisions any time soon, in the near term they could help teams work faster and with greater agility.

The implications of this new technology are staggering, and I am not sure I fully comprehend them yet. Some insight can be found in the excerpt below from the October 14 episode of the Hard Fork podcast, in which the hosts explore the possibility of this new tool being used to manipulate people.

So one thing that you can imagine people doing with this knowledge that you can essentially simulate people at scale through these large language models is, for example, to test out propaganda campaigns.

If you are a government that’s going to do some large scale manipulation of public opinion, you might test it on a million virtual citizens before you actually put it out into the world and see which one is the most likely to work. You might also use this if you are, for example, a fraudster who is trying to scam people out of giving you their Social Security numbers or their credit card numbers. You could actually test the scam on simulated humans, figure out how to make it more convincing and compelling, get a sense of how it’s going to work on real people, and then go out into the world and do it on real people.

“Montage of propaganda posters:” Created by Stable Diffusion AI: Source Lexica.art

There is just so much to unpack here, particularly given the recent history of technologies that were created and shared with little (if any) understanding of the broader social, historical, cultural, and economic contexts within which they play out. There is a lot to be explored, but I will end, as I began, with a quote from Mohsin Hamid, because I think that artists sometimes play the role of canaries in the coal mine, revealing themes and undercurrents that may not otherwise be visible to us.

So it isn’t simply the case that machines are better able to understand humans. It is also the case that machines are making human beings more like machines, that we are trying to rewrite our programming in such a way that we can be predicted. And for me, that’s the more frightening aspect of the shift from sorting to prediction.

These technologies, and there will be more of them, will just stealthily ease into our lives, becoming part of our reality, changing us in ways that we cannot predict. I find this extremely worrisome, and am reminded again of just how prescient Neil Postman was when he came up with his five things we need to know about technological change! Not that anybody listened to him when he wrote his piece, and sadly, it isn't clear that anybody will listen to him now.


Note: Danah Henriksen and I published a piece recently that may be relevant to this discussion (though it did not focus specifically on AI). Check out Human-Centered values in a disruptive world.
