This is the fourth of what was supposed to be a three-post series about how media influence our thinking. The first post uses the invention of writing and print to unpack the meaning of McLuhan’s statement, “The medium is the message.” The second post focuses on a story by Ted Chiang that takes these academic ideas about the psychological and social implications of the advent of a new technology and frames them within two human narratives, going deeper in the process than any academic article ever could. The third post builds on a piece written by Neil Postman over 30 years ago and shows how the questions and issues it raises are still relevant for us today. Two additional posts followed: four and five.

The award-winning novelist Mohsin Hamid was a guest on the Ezra Klein Show to discuss his latest book, “The Last White Man.” The conversation is worth listening to in full since it covers a lot of ground around race, identity, algorithmic thinking and lots more. In this blog post I just want to draw attention to one part of the conversation, where they discuss the impact of current and new technologies on us as individuals and as a society.
 
In particular, I liked the idea that today’s social media culture can be described as a “machine-based sorting + prediction culture,” and the implications of that. I am reminded of how Yuval Harari once described his experience of coming out as gay when he was 21 and wondering what it would have been like if he had grown up in today’s social media environment. He argues that Facebook or YouTube algorithms would have “known” of his sexual orientation before he would have. As he said, “there is something incredibly, like, deflating for the ego, that this is the source of this wisdom about myself, an algorithm that followed my movements?” He goes on to suggest that a consequence of algorithms knowing “us” better than we know ourselves is that our brains can be hacked. For instance, if Coca-Cola knew he was gay (even before he knew it), it could target ads to him and influence his behavior without his ever being cognizant of it.

I am including below some key quotes from the conversation Ezra Klein had with Mohsin Hamid. It seems to me that the ideas in the quotes below flow well from my previous three posts about unpacking McLuhan (I, II, III).

Some key quotes:

Mohsin Hamid: … in our current technological, cultural moment, what’s happening is that, as we merge with our screens — and we spend an enormous amount of time staring into our screens and doing things with them — we’re encountering a sort of a machine culture that is, by its very nature, sort of sorting-based. A huge amount of our cultural activity now is sorting things. Do I like this or not like this, do I follow this person or not follow this person, does this person like me or not like me? And if I identify a meaningful aspect of their identity, which is not like me, the person is fundamentally, in some way, opposed to me and in conflict with me. And what we have to do, for a lot of people, is to separate from or, even worse, extinguish the people who are not like us.

Ezra Klein: … I think you’re heavily talking here about social media technologies, about identity technologies online, about the way we spend our time and have things given to us now digitally — round one was sorting and round two has been prediction, both in terms of all the algorithms predicting what we’ll like, and in that way shaping what we end up liking, but also in the sense of the political campaigns that unleashed their algorithms on huge amounts of consumer data to figure out who we’re going to vote for, the advertising campaigns that are sorting us into this kind of consumer, that kind of consumer. And there’s an interesting way in which the lived economic reality of endless prediction conflicts with what we tell children and sometimes tell ourselves, which is that we’re all individuals, we’re all special, you can’t judge people by their group or their appearance because you’ll get it wrong.

Mohsin Hamid: I think that’s right. And I think that, for me, one thing which is very interesting is, when we talk about going from sorting to prediction — which I think is correct, that is something that’s happening — we tend to imagine that predicting is an observational activity. In other words, that technology is allowing us to see where we might go as individuals and to predict. But I think that prediction is actually much more perniciously a behavior modification activity. In other words, making us into more predictable beings.
And that, I think, is by far the greater danger. In other words, if we want to be able to predict people, partly we need to build a model of what they do, but partly we would want them to be predictable. They should be inclined towards doing certain things. And so if you take somebody with the sorting mechanism, if you give them information that plays upon humans’ innate sense of prioritizing the information about threats — economic threats, racial threats, we prioritize that information — what begins to happen is it’s not just that the way we were going to behave remains unchanged. The way we are going to behave also changes. And it changes in predictable ways.
So it isn’t simply the case that machines are better able to understand humans. It is also the case that machines are making human beings more like machines, that we are trying to rewrite our programming in such a way that we can be predicted. And for me, that’s the more frightening aspect of the shift from sorting to prediction.