Dede interview

Punya Mishra: All right, because if I forget that, that's horrible. So I'm going to read out a little bit of an intro, just in case our audio guy wants it for the Learning Futures podcast, and then we'll jump into the questions. And really, those questions are just something we came up with; the conversation will float and weave around them as we see fit. We rarely stick to the script. So, with that, welcome to the Learning Futures podcast. I am your host, Punya Mishra, joined today by Dr. Melissa Warr, faculty member at New Mexico State University. Welcome, Melissa. How are you?

Melissa Warr: I'm good. How are you, Punya?

Punya Mishra: Good. Well, I'm super excited about this conversation today with our guest, Dr. Chris Dede. Chris is, and this is not an exaggeration, one of the most influential scholars in the field of educational technology and innovation. He's currently a senior research fellow at the Harvard Graduate School of Education. Prior to this, he was the Timothy E. Wirth Professor in Learning Technologies at the same school. His interests are in developing new types of educational systems to meet the opportunities and challenges of the 21st century. His research spans emerging technologies for learning, infusing technology into large-scale educational improvement initiatives, developing policies that support educational transformation, and providing leadership in educational innovation. Most pertinent to our conversation today, he's currently co-PI of the NSF-funded National AI Institute for Adult Learning and Online Education. Chris has truly been a leader in the field and has inspired a generation of educators and educational researchers. I'm proud to call Chris a close colleague and friend. So thank you, Chris, for joining us for the Learning Futures Podcast.

Chris Dede: Well, thank you, Punya. And it's been a highlight for me to be a co-host with you of the Silver Lining for Learning series. I met with three advisees this morning.
Each one of them was from a different part of India. I referenced the four episodes that you and I had done recently for SLL, and they left excited and ready to learn more.

Punya Mishra: Cool. Thanks for making a plug for SLL; I forgot to do that. It's always good to plug our own webinar. But with that, I'm just going to open it up. So tell us a little bit about your background, and particularly how you got interested in these topics related to AI and learning and creativity. Because I listened to you on another podcast recently, and you have been fiddling with these topics since the seventies or eighties. So it'll be wonderful to get your perspective, when everybody's going sort of crazy right now, that these are topics and ideas that have been around for a while, and that you have been working in this space.

Chris Dede: Yes. Over half a century ago, when I was a graduate student, slightly after the dinosaurs went extinct, I read an article on AI in education, the very first one, published in 1970. And it confidently predicted that within six years we wouldn't need teachers anymore, because AI was going to take over. So I've lived through many cycles of advances in AI and been impressed by the progress. I've lived through many hype cycles of AI and been depressed by the glowing endorsements that somehow never worked out. And it's an exciting time to be in the era of generative AI, which is a major advance in the field, and yet somehow less than it's being portrayed as by a lot of media.

Melissa Warr: That's interesting. You called it generative AI. What's the difference between generative AI and other types of AI?

Chris Dede: Well, it's actually a technical term within the field, and it means that AI is generating performances that we have previously thought were perhaps limited to human beings. So it's a step further in terms of what AI can do.
As an example, a search engine involves AI, and it generates a list of resources from which the human being picks. Why this is called generative AI is that it synthesizes things from those resources and gives you, in a sense, the outcome. Now, it may be a good outcome or it may be a bad outcome, but the fact that it's leaping to the outcome, as opposed to giving the human being the chance to shape the outcome, that's a really interesting development.

Punya Mishra: So, Chris, I don't know if you saw this article by Ted Chiang in the New Yorker. I think it came out just a couple of days ago, and I really found it one of the best descriptions of generative AI, along the lines of what you were saying, because he described ChatGPT, as one example of generative AI, as a lossy compression of the web. And the analogy he draws is to a JPEG file, where you compress by keeping certain key colors and interpolating in between. And that's what leads to those fragments of noise that you see in an image. And that's where GPT-3 also falls flat, in that interpolation. It actually makes these leaps, because it's a lossy compression of the information that it's found there.

Chris Dede: I think that's partly true, but I think that it's more than that. An example would be: let's say that I live in Des Moines, Iowa, and I read about all these floods and fires on the West Coast and climate change. And I think, well, I don't live on the West Coast, I'm not near an ocean, so I'm not going to worry about this. And in fact, I'm pretty sure that nobody has ever done a study on the effects of mid-range climate change on flooding, specifically on street corners in Des Moines, Iowa.
Generative AI can make that prediction, because it can pull data from topological databases, from meteorological databases, from other kinds of databases, and come up with a forecast that's beyond just gluing together stuff on the web. It's actually a powerful form of big data and large language models working together to create something. Now, is the forecast necessarily accurate? Well, it's no better than the data. But it is interesting that it can do that. It might involve exactly what the New Yorker piece is describing, but where the forecast hasn't been done and generative AI can generate it, that's beyond what the article describes. So that's...

Punya Mishra: Oh, that's really nice, because what it says is that I think what Ted Chiang is talking about is sort of interpolation, and what we are talking about is sort of extrapolation: that you can go beyond, and that's where the risk lies, in the sense that you could get it right or wrong, but it has the ability to do that. And quite confidently, I must say.

Chris Dede: Yes, yes. There are no error bars on what it comes up with, which is another difference between people and AI. And so we really... oh...

Punya Mishra: I know a lot of people who have no error bars at all.

Chris Dede: Well, that's true. As academics, you're right, that's true. But caveat emptor is definitely what we should have for AI.

Punya Mishra: Right, right. Melissa, do you want to jump in?

Melissa Warr: Yeah, I was going to say: given that, is it fair to call AI intelligence, if it's making these extrapolations, these leaps? Can we fairly call it intelligence?

Chris Dede: Well, if you bring together 20 experts in psychology and you ask what intelligence is, you get 23 definitions.
It's also something that changes over time, because things that we previously regarded as intelligent are now seen as, oh, an algorithm can do that, so I guess it isn't intelligence. What I would say is that the kind of intelligence that's useful understands all the dimensions of a problem. So if someone's dying of cancer and you want to advise them about treatment options, you're not just making a forecast of, well, with this treatment you'll live this long, and with this other treatment you'll live that long, which is what AI can do. You're saying: what is your belief about quality versus quantity of life? How is your family going to be influenced by when and how you choose to die? What are your spiritual beliefs in terms of life? AI does not have that kind of intelligence. It can do calculative reckoning, where you're working with data of different kinds and combining them in powerful ways to produce a forecast. And we have regarded that as intelligent, and I would say that to some extent it is intelligent. But it's not what I would think of as a strong definition of intelligence, which involves common sense, which involves understanding what it means to have a body, what it means to have a culture, what it means to have an ethical system, and so on.

Punya Mishra: I'm so glad, Chris, that you brought that up, because, interestingly, I got interested in psychology and cognitive science, and my whole career, because I read Douglas Hofstadter's Gödel, Escher, Bach back in high school, and that got me reading about Weizenbaum's ELIZA. And I remember there was this book which was critical of AI, by Richard Dreyfus, I think it was Richard, or by Dreyfus, called What Computers...

Chris Dede: Hubert Dreyfus. Yeah.

Punya Mishra: Hubert Dreyfus, thank you. Richard Dreyfuss being the actor, who I'm sure has written books, but not about AI.
The title of the book was What Computers Can't Do. And I think at the core of the argument was exactly the point that you were making: that knowledge, information processing, all of those things are deeply embodied in our bodies, in the broader cultural matrix, in the broader social matrix, and that AI would never have that. And I remember it threw all these guys into a conniption, because at some level they were thinking of cognition as being in a vat. You know, there were all these stories about, can a brain be in a vat, and all of that. And I think that many of those conversations have cycled back again today.

Chris Dede: They have cycled back.

Punya Mishra: About AI. So I don't know what your thoughts are.

Chris Dede: Well, like you, I was quite influenced by Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid. I was influenced by Dreyfus, and the debates about AI being more than cognition, and Searle and the Chinese Room. But what Dreyfus said is that he would be impressed by symbolic AI if it could listen to a children's story and understand what was going on. And that's a really good way of putting it, because there are lots of kinds of culture and causality that are embedded in children's stories. So where are we now? Well, I haven't tried this, but I think if you told Goldilocks and the Three Bears to generative AI, it would probably now understand, in contrast to the 1980s, that this is not a factual story, right? It's a symbolic story. And it would probably understand this sort of tension between too big and too small. But if you started a dialogue with it about the story... say the first line is, Goldilocks went into the woods and got lost, and you stop there and you ask generative AI, well, why do you think Goldilocks went into the woods? Was she being naughty? Was she adventurous?
Was she deeply concerned about climate change and wanted to start studying it? Generative AI would be completely lost and at sea at that point. So it's maybe one level deeper in understanding the children's story, but it's not three or four levels deeper, which is really what you need for the kind of intelligence that we're describing.

Punya Mishra: That's interesting. That is super interesting. Melissa, do you want to jump in, or I could go, whatever.

Melissa Warr: Yeah. So you said you've thought about AI in education for a long time. What role do you see AI playing in education?

Chris Dede: Well, unfortunately, a lot of the emphasis on AI in education, going all the way back to that 1970 article, has been about replacement. It's been about automating different kinds of roles, where AI takes over the role. And much of the history of ed tech has teachers being wary about ed tech, because they think, well, people want to automate my job and take it away from me, which isn't typically the case with ed tech. But because of the emphasis within AI on replacement, you can understand where some of that fear might have come from. What I've always been interested in is what people and AI can do together, based on complementary strengths, with the whole being more than the sum of the parts. What can a human-AI combination do that neither the human nor the AI alone can do? And that's been a theme in science fiction for a really long time. I mean, the really nuanced discussions of intelligence augmentation, IA, have come out of science fiction. You look at Star Trek: The Next Generation. You've got Data, the android, who is in fact an AI-based machine, and you've got Picard, the captain, who is the wise human being, and the two of them really work well together, because each has something that the other one lacks.
And in the same way now, as I and others study IA, the question is: if AI takes over part of the partnership, unfortunately the part that we've primarily been educating, so that now we're preparing John Henry to lose to the steam engine, then what is the other part? What is the secret sauce that human beings bring that AI can't do? And I think that as AI evolves, that other half changes, but I think it's in fact well more than half of really interesting work, and something that we're sorting out with this latest generation of AI.

Punya Mishra: So one of the things that I often think about is that how we define human capabilities, intelligence, talents, and so on is often very much determined by the current technology of the time. So when we had clay tablets, the brain was a thing you could make impressions on, and then of course we had the loom, and, you know, we go on to the computer. So there is a sort of risk to that as well, right? Because you are taking metaphors, which are essentially instrumental and technological, and seeing human capabilities in those terms. So I wonder what your thoughts are about that, sort of the reverse flow of ideas from computation into how we think about human cognition, and what we lose in that process. Do we start seeing people as machines that can be manipulated, or as complex computer systems, rather than in some of the terms that you've been talking about? Or is this the point where we can actually start focusing on some of these elements which make us deeply human, and then make that separation? I'm just wondering what you think about that.

Chris Dede: Well, I think this tension that you're describing has been around in these various disciplines for a long time. I mean, behaviorist psychology basically eliminated consciousness and just saw people as stimulus-response machines.
Cognitive psychology was certainly an advance on that. But again, it was all about cognition, without really understanding the complementary roles of emotion and social consciousness and so on, which in fact are not separate from cognition; they're all linked together richly in the brain. And now, with generative AI, you have people saying, oh, well, this must be sentient because it can do X or Y. Well, that's a considerably lower threshold for sentience than I would use, but okay. So it comes back down to what really is wisdom, I think, which is the end challenge that we're looking at here. Because generative AI can have more knowledge, really, than any single human being, at least in the sense of rapidly being able to look it up and produce it from, as you say, interpolating on the internet. But wise use of that knowledge requires all sorts of things that AI cannot do. And so there is that human wisdom that we retain, which really involves a kind of knowledge of culture, knowledge of what it means to have a body, and of what it means to have spiritual values, that AI is not going to get to. AI is not a weak form of human intelligence. It is an alien intelligence. It is as alien as anything that you're going to find in outer space. And the strength is that, because it's alien, it truly is complementary. But the weakness is that we hold up AI as, okay, if it can beat humans at Go, then it must be smarter than humans, and we need to see ourselves in that light. No, not really.

Punya Mishra: So, Chris, I'm going to push on that a little bit. I think that as these systems enter into our lives, they will be forced to take moral decisions.
So I have a Tesla; I haven't paid for the self-driving, but at the end of the day, that car is solving the trolley problem every moment that it is out on the road. It is deciding whether it's okay to protect me, versus hit a wall, versus somebody on a bicycle. Those are the kinds of decisions that are going to be taken by AI. If we think about AI in an educational situation, determining what kind of intervention a child requires or not, we are going to increasingly see those kinds of decisions starting to transfer to AI, irrespective of whether you and I might agree, or Melissa and I might agree, that AI is an alien intelligence and it's not ready for that. I wonder what your thoughts are about the fact that it'll just enter into our lives, and then it'll become part of the new normal, so to speak, and it'll be taking decisions which ideally it should not be taking.

Chris Dede: Right. So I think that that's a really good way to look at what's going on. I'm spending a lot of time these days thinking about a game that I and many others helped to develop about 14 years ago. It's called Quandary. And it actually started in my motivation and learning course, which I'm still teaching after all this time, with an Advanced Leadership Initiative fellow at Harvard who sat in on the course and said, I'd really like to create a game that teaches ethical decision making. And over a couple of years, I and many others and students from the course worked with her, and they produced this game in which you are the head of a colony on another planet somewhere out in the universe, and you're faced with quandaries, which are situations in which, no matter what decision you make, somebody is going to be hurt. So they're decisions that never come out win-win. It's always win-lose, because of the nature of the decision. And so, ethically, how do you resolve quandaries?
That's what the game is about: teaching a kind of process of ethical reasoning. And now we're competing in the DARPA Twos challenge to see if we can get some money to do Quandary version 2.0. Version 1.0 has been on the web forever. It's free, it's had two million plays, so it's scaled really well. Version 2.0 would add AI-based assistance for the human colony leader. So now there's another voice. The AI is saying, well, if you do this, this is going to be the outcome, and if you do that, that's going to be the outcome. And if you think from a utilitarian perspective, then this is what you should do; but if you think about the least harm to the most vulnerable, then this is what you should do. It enriches that dialogue, and potentially results in better ethical decision making, with an IA device as your partner. Now, what happens with Tesla and the self-driving vehicles is a sort of small-scale version of Skynet in the Terminator movies, which are about what happens when you give life-and-death power to an automated machine. Well, in the Terminator movies, that doesn't have a happy ending. And I would argue that it's maybe not going to have a happy ending even at smaller scale, in things like automated driving. So think about surgical robots. Surgical robots are better than a surgeon, given certain kinds of programming that has to happen in advance. But what you don't do with a surgical robot is say, why don't you take over the operation? Just cut wherever you think you need to. I mean, that would be horrible. So automating driving is one of those things that is maybe not a good idea to do.

Punya Mishra: Yeah.
Chris Dede: Because of the ethical kinds of choices involved. Not that human beings necessarily make good ethical choices when they're driving either, but at least you can educate the human being to try to make good ethical choices, in a way that's difficult to do with the AI.

Punya Mishra: So I just have a quick comment here. One of the things that did not receive a lot of attention was this AI system that Facebook created that plays the game Diplomacy, which is all about forming alliances, cheating, lying if possible, and it plays it really, really well. And to me, that was one that sort of went under the radar as something which, again, can be used in the IA mode: if you're in a negotiation situation, it can be a great assistant. But in a world where so much stuff is happening through online systems, where you don't even know whether there's a real person at the other end, I could see a lot of crazy stuff going on because of systems like that.

Chris Dede: You can. And it's always important to remember that any game or any model, which is really what AI ends up dealing with, is an oversimplification of reality. And it's often bounded in ways that are unrealistic. So let's say that I'm part of a game-playing group, and we like to play Risk or other diplomacy games. And over time I become really the master: I'm very adept at building trust and backstabbing and doing all those other kinds of diplomacy things. What are the consequences for the friendships long term, right? The AI can understand how to play the game well, but what about the big-G Game that surrounds the little-g game? AI has no sense of that, or of what it may be creating in the big-G Game.
So in the same way, we wouldn't really want the AI making the diplomacy decisions in the negotiation, because even if you get a tactical win, you're likely to get a strategic loss.

Punya Mishra: Melissa, you wanted to transition a little bit towards augmented intelligence and education. I love that question, so go ahead.

Melissa Warr: Yeah, yeah. I was curious because I think a lot about what we're teaching kids and how we prepare kids. How do we prepare them to work with AI, in this intelligence augmentation mode? How do we prepare kids to operate in that mode?

Chris Dede: Well, I think the hand-wringing about writing is interesting, and it's an example of that. So AI can now do descriptive writing, and in fact the kind of essays that kids have had to write for the SAT are now not a good question, because AI can ace that test from one end to the other, which shows that we're, again, preparing people to lose to AI instead of focusing on what people can do differently and better. But anybody who makes their living by writing will tell you that they're not making their living by descriptive writing. I mean, a reporter begins with having to get a description, but if you stop there, you're never going anywhere in the profession. You're going to end up writing the obituaries and the pet shows. Because narrative is what distinguishes a descriptive story from a great story: narrative that puts the story in a larger context, that tells a story that people can relate to, that they see themselves in, and so on. So give kids the descriptive story and say, turn this into a really compelling narrative that, within your culture, speaks to people in a deep way, so that they see themselves inside of it, they see the story relating to other kinds of stories that may be part of their culture, and so on.
That would be an example of how to handle this. And, you know, this isn't new with AI. I spent hours and hours and hours when I was in elementary school doing worksheets, unfortunately, math worksheets, and I got to the point where I could multiply faster than anybody else in the class, and my answers had a higher percentage of correctness than everybody else's. What a genius I was. Well, a $3 solar-powered calculator can run rings around me. And in fact, there is no workplace in the world now, even in the most primitive country, where you make a living by factoring, or by doing the kinds of math that we teach. Those are no longer on the human side of the performance equation. So the idea that the line moves over time, that the division of labor changes in such a way that machines take over some of what people have been doing, and the educational system has to change in response, is not a new one. And people, not just me, but many people, including a lot of people in math education, have been screaming for decades that we need to stop teaching rote manipulations that are always the part of the machine, that nobody makes their living doing in the workplace. What you do need is estimation. You need to understand arithmetic just well enough that, if the power goes out and you're working for McDonald's, and somebody gives you a $5 bill and says, give me $50 in change, you realize that something is going wrong there, even though the machine isn't there to tell you, no, no, no, it's actually $2.20 in change. So I think we have to say, just as in math we say, well, it's really estimation of the kinds of algebraic and arithmetic manipulations that are now done by machines; with AI, we need to say, well, it's really narrative, or it's really creative exposition from a descriptive core, that distinguishes what people can add.
But that isn't the debate that we're having.

Melissa Warr: Yeah. It's interesting. So last week I went to a faculty development thing about all this ChatGPT stuff and everything. And, you know, I put some of my student reflection prompts in, because I thought, well, as long as they have to describe personal experience, the AI won't be able to do it. And so I have this little reflection question about how they experienced social learning theories in their teamwork. And the AI... I mean, I could spot it, because I know the students and I know their teams. But, you know, it claimed to have personal experience with these certain theories, and with what it did. And so it got me thinking. Sometimes I'm like, well, if a computer can do it, why would we ever teach a child, or anyone, to do it? But there is a foundation of knowledge: I want to make sure they understand these theories and how they apply, so that they can think of new ideas that are based on those theories. Do you think we need to be able to do some of that foundational stuff, even if an AI or a computer or a calculator can do it?

Chris Dede: Well, absolutely. And the distinction that I would make is between arithmetic manipulations and the number line. Both of them you can look up on the internet; both of them seem to be relatively simple ideas. But the number line is in fact the foundation for a lot of higher kinds of mathematics, and really understanding the number line is something that people absolutely need to do, whether or not they get beyond estimation in terms of multiplication. Another example would be the periodic table in chemistry. Should you memorize that calcium is element number whatever and how many electrons it has? Well, no, you shouldn't, any more than you should now be memorizing my favorite example of bad curriculum, which is learning the capital of every state.
Because it's not foundational knowledge; it's just factoids. Now, and I'll just finish: if instead you ask why the capital is located where it is, that's a deep question. If you ask, in the periodic table, why do the elements above a certain point become radioactive, that's a deep question. So it's understanding which knowledge is foundational and leads to other knowledge, and which knowledge is just factoids of different kinds.

Melissa Warr: I mean, I was just thinking, with the capitals, though: if I'm reading a news story and it says something happened in Salt Lake City, and I know that's the capital of Utah, that gives me additional context. But if I don't know that, I don't know to look it up. You know what I mean? Isn't it good to have some of that basic context in there?

Chris Dede: Well, yes and no. I mean, how many capitals are there across the world now, and how much time in the curriculum should we spend on people memorizing them? Because it's no longer the case that the things that affect your life are only the things that happen within the United States, or even within the neighboring states that surround you. We're so richly interdependent now that it really matters what the capital of a province in China happens to be. So at some point you have to say, if I can look it up quickly, I don't need to memorize it.

Punya Mishra: Yeah, I mean, that's a fundamental tension that's always been the case in education, and I think it will always be the case. But you can sort of imagine that there would be an intelligent assistant with you, and if that fact were important, it would mention that Salt Lake City is the capital. You know what I mean? We are thinking in terms of the tools we have so far, but there might be other ones.
I want to transition a little bit, Chris, because one of the themes that we touch on in these conversations is around the issues of creativity. And technology in many ways has been transformative, whether you look at it in terms of creative output, in the various kinds of media and things that we can create, or in how we can disseminate, the role of gatekeepers, and all of that, right? So I wonder what you think about the role of AI in human creativity, and particularly I'm looking at what are like little baby tools now, but which are just going to grow: things like DALL-E, or Midjourney, or Stable Diffusion, and so on. What role do you think these tools can play in the human creative process?

Chris Dede: Well, I think it's analogous, in a way, to the writing example. So let's say that I want to create a picture, and I want the border of the picture to be this very beautiful flower, repeated 150 times, which is how many it takes to make a complete border around the size of the canvas. For the artist to paint that identical thing 150 times is just laborious. And having a program where you do one and you say, now repeat 149 times around the border, bang, it's done. That's great. That's like the descriptive basis for writing the narrative story. But if you then say, I want to vary the flower in different ways that reflect different conceptions of beauty across cultures and across history, I am not so sure that DALL-E could really pull that off, in the way that a painter who had some knowledge of the history of art, and who had some knowledge of what's considered beautiful in different cultures, could do.
I think if you were painting children and you said, don’t give me 20 versions of the same child, but give me 20 different personalities of children that are expressed in their posture and their face, again, I don’t think DALL-E can do that now, and maybe not for a long time, compared to human beings and their ability to do that. So can DALL-E produce something, as it does for some of our Silver Lining for Learning graphics, that looks pretty good? Yeah, it can. And should DALL-E be doing that, as opposed to Punya having to do that when Punya has many other things to do? Yeah, I think it probably should. But if you want to go deeper into something that’s going to be seen like Van Gogh’s, you know, work, or Munch’s work, that’s so evocative for people. I think it’s more like, I forget which Star Trek episode it is, but McCoy and Spock are in a museum and they’re standing in front of this really beautiful natural painting of a canyon or a mountain or something, and there’s awe and wonder that the painting creates. And McCoy is all excited and turns to Spock and says, what do you think the title is? And Spock says, geologic formation obscured by life forms. You know, that’s the difference between AI and the human creator. Punya Mishra: Yeah. So I will not take it as an insult that you did not compare my, you know, images I create for Silver Lining with Van Gogh. That’s okay. No, I’m just kidding. Um, but I think it brings up a really important point here, which I think, you know, some people do talk about, but I don’t think we pay enough attention to. So for instance, if you go to DALL-E or any of these, and you type in the word doctor, the chances are you are gonna get white men as doctors. Even though your neighborhood doctor most probably is Dr. Vidyasagar from India, you know, it’ll still give you white men.
It’ll clearly not give you enough women, it’ll not give you, um, women of color for sure. Um, and so there is this issue of what data these things have been trained on, and how much of it is coming from what are sort of the WEIRD nations, right? The Western, you know, educated, industrialized, rich, and democratic countries. And I think that’s a very important issue when you think about particularly art and the creative outputs, which are so deeply culturally resonant. Chris Dede: Yeah, I couldn’t agree with you more. And in fact, I co-authored an article with one of my former doctoral students, uh, Ed Dieterle, who works for ETS, that was published in AI & Society, and it was on the five ways that bias can be introduced into AI. Um, those include not only a biased algorithm, which is what people spend a lot of time thinking about, but a dataset that contains subtly implicit forms of bias, which is the point that you’re bringing up. And then, um, the kind of bias in the structure into which recommendations go. Hmm. So, um, there are many forms of bias that can creep into AI, in the same way, frankly, that there are many forms of bias that are part of innate human decision making. And, um, understanding, uh, that challenge and not just saying, well, if it’s on the internet and you average across everything, you’re gonna come out with an unbiased thing. Nothing could be farther from the truth. You’re much more likely to come out with a biased thing averaging across the internet. Punya Mishra: So, um, I know we are almost at like 45 minutes here, so I wanna transition a little bit to the work that you’re doing at the NSF-funded, uh, National AI Institute for Adult Learning and Online Education in particular. Again, I know you have done a lot of work on, you know, the nature of adult learning, preparing for a changing world of work.
So what are some changes that you think we need to be thinking about? Um, and of course, feel free to make a plug for the work that your, uh, institute is doing, uh, because I’m sure our listeners and readers would be excited to, uh, learn more about that. Chris Dede: Well, I think the work that AI-ALOE, which is the acronym for the institute, is doing is quite interesting and speaks to many of the kinds of themes that we’ve been discussing, because the core work that the institute has started with is from Ashok Goel at Georgia Tech, and it’s about building intelligent assistants for university professors. So I’ve been talking about, well, you know, reporters may have an AI partner and doctors may have an AI partner. Well, I may have AI partners quite soon. And necessarily those are quite narrow, because AI is not a generalized intelligence; it’s a very narrow and specific kind of intelligence. So Ashok has built question-answering instructional assistants, tutoring instructional assistants, a library instructional assistant, a social instructional assistant that helps students connect with other students in a big course that might be learning partners for them. And of course, all of this is empowered by online learning, because machine learning requires lots and lots of data, and online learning collects much richer data sets than are readily possible with face-to-face learning. So suppose that I’ve got all those assistants sitting around me a couple years from now. I can easily be de-skilled by those assistants, where they’re doing much of the work and I’m in a sense working for them. So the question-answering assistant says, I don’t know the answer to this one, you do it. Or I can upskill.
And if I upskill to more deeply understand and personalize learning for my students, students from different cultures, students with different kinds of life challenges and so on, then I’m getting IA, intelligence augmentation; then we’re able to do more collectively than I could without the assistants. And I think that’s going to be an interesting contribution of this institute that I’m part of: building these assistants, putting them into the crucible of practice, and seeing the extent to which people can and do upskill, or whether they just let themselves be de-skilled. And, uh, I think that that is a challenge that, um, we face as faculty. Now, the other thing that I’m particularly interested in, that I think is a frontier for the institute that I’m part of, that I’m hoping we get into, is how we assess. Because we talked about the fact that psychometric tests are really on the AI side of the equation. So performance simulations, in contrast, are very much on the human side. And I do some work with Mursion, full disclosure, which is a digital puppeteering company that gives people a flight simulator for human skills, where you can practice negotiation in virtual low-stakes settings before you go into high-stakes negotiation on Valentine’s Day, or when you’re, you know, trying to get a promotion or something. And, um, AI works at the front end of those systems, because it creates a much richer context that’s evocative of those skills and authentic in terms of the settings in which you utilize those skills, like a pediatrician learning to, um, elicit knowledge from a young child who’s feeling ill. But on the back end it’s got machine learning and all this rich data flowing into it from, uh, human behavior, second by second, within the simulation, that then can be fed back to the coach and fed back to the intelligent coaching assistant, both as a learning mechanism, but ultimately as an assessment mechanism. So somebody says, I’m a skilled negotiator, I should get this job.
Okay, well, let us escort you through these three simulations and let’s see how you do. So I think that there’s promise in instruction, which is the things that can be taught, but I also think there’s promise in the things that need to be learned but can’t be taught, like creativity or like leadership, where if you reduce them to a recipe, like inquiry, then you’re not learning the real skill. Punya Mishra: Chris, do you wanna take a break to just get a glass of water or something? Chris Dede: No, I’m okay. I’ve got some stuff here. Okay. I will have to disappear in about 10 minutes. Punya Mishra: Yeah, no, we are almost towards the end. I think the point I want to make builds on what you just said, which is that GPT, these kinds of large language models, when you’re throwing them at the web overall, you are getting a certain kind of behavior and stuff that’s emerging. In some ways, the real future, at least in education for me, is much more domain specific: models which understand chemistry or mathematics, and then can really become powerful tutors. Because I’ve been playing with GPT, trying to see if it can tutor me about problems in mathematics, and it’s interesting how quickly you sort of hit a wall, uh, apart from the fact that it basically flunks basic arithmetic every time, which is kind of funny. But I think there’s promise in domain-specific models, uh, because there is so much nuance and richness there that when we are throwing it across the whole web, in some way we are gaining some stuff, but we are also losing a lot in terms of precision. Uh, and that, I think, is what’s gonna be more useful in terms of intelligent tutors and so on. Chris Dede: Yeah, I would agree. But the qualification that I would put in is a distinction between the hard sciences and the soft sciences, if you will.
So if you ask, um, an AI to explain something in the hard sciences, like, you know, what are the different cooling mechanisms if you start with hot water or with cold water, you’re gonna get probably a really good explanation, maybe a better explanation than the typical high school science teacher. Um, on the other hand, if you ask a question about human behavior, you know, how can people hold contradictory beliefs simultaneously, and act on one or act on the other, without appearing to notice a contradiction between them? AI is at sea in something like that, and that’s because much of social science is also at sea in terms of something like that. So we have to be very careful between the kind of false precision that AI can give in soft sciences versus the true precision that it probably can give in the harder sciences. Melissa Warr: Yeah. So, um, I think we have one more question. We were wondering, um, about predicting in AI; you know, we know it changes and grows every day. Um, do you think we should be watching out for these changes in trends, or are there any current trends that you think are really important in the current context? Chris Dede: Well, I, um, was quite active in futures research in the early part of my career. And, um, a lot of that is based on trends or on predictable discontinuities. Like you can say, if there is a pandemic, then these are the things that are likely to happen, even though, uh, the pandemic isn’t a trend. But I was always mindful of Alan Kay’s saying that the best way to predict the future is to invent it. And I think that the challenge that we face with saying, well, AI predicts this and AI predicts that, is that, again, it’s like the Terminator; as I said, science fiction has really confronted a lot of these things.
The central message of the Terminator is that there isn’t really fate; there’s no fate but what we make. We can, um, change our destiny and change our future, even though it’s not easy. And, uh, in the same way, we need to be very careful about AI saying, well, I’ve studied macro history and these are the things that are going to happen. Well, yeah, they’re probably the things that are gonna happen if we drift into them, or if we believe in them and stop trying. But, um, the human spirit, you know, of attempting to overcome what the trends are creating is the subject of a lot of our hero stories. And, uh, men and women who are heroes, um, step outside of the trends and the predictions and say, I’m gonna do something that appears to be impossible because I believe that it’s really important. And some of the time they succeed. Punya Mishra: That’s a wonderful positive note to end on. One of the things that I read somewhere is that, you know, most science fiction time travel stories are about, like, going back and killing baby Hitler or something like that, you know, doing something which changes the present. And at some level, we are all time travelers; we are all moving into the future. So I think what you are saying is that if you really want to influence the future, do something now to fix it, rather than saying, can I build a machine that goes back in the past? Chris Dede: Exactly. Exactly. Yeah. Punya Mishra: So Chris, again, thank you so much for joining us. I know you have an incredibly busy schedule, so I appreciate that. Is there anything that you think we could have asked you, or we should have talked about, that we didn’t cover, that you’d like to add before we say goodbye?
Chris Dede: Well, I just think that understanding IA is really important, because we need to change education to maximize the human side of what we prepare people for, and to prepare people to, um, use AI as a servant as opposed to making AI their master. And, um, I’ve learned a lot in the course of the interview. It’s always fun to kick these ideas around. And thank you again for doing TechTrends and for informing the field with it. Punya Mishra: Thank you for this interview. Thank you, Chris. Thank you. All right, let me pause the recording.