The Absurd One-Sidedness of the Ethics of AI Debate: A rant

by | Saturday, February 03, 2024

It seems no conversation about AI and education is complete without discussing the importance of the ethical use of the technology. There are numerous reports and academic articles about it (this and this and this … I could go on and on).

There is, however, one aspect of this discussion that really pisses me off.

Don’t get me wrong. I think considering the ethical use of AI (and other technologies/media) is incredibly important. And to be clear, these efforts reflect a genuine and well-intentioned recognition of the need to establish ethical guidelines to ensure the responsible integration of AI in education, including educating students about these issues. Digital citizenship is no longer constrained to the digital world (and one can argue that it never really was). Actions in the digital world have real consequences, and it is important that we, as educators, take these issues seriously.

My annoyance is with the absurd one-sidedness of the whole discussion.

Consider this scenario. If, in my role as a researcher, I have to interview just one person for a research study, I have to have my research plan approved by my university’s institutional review board to ensure that my research will cause no harm to the interviewee (or anybody else, for that matter).

Compare this to the fact that Meta conducted a secret mood-manipulation study (back in 2012) in which it deliberately tampered with users’ news feeds to study the impact on people’s emotional states. All this without seeking any consent from the participants.

And this is not an isolated example, nor is Meta the only culprit in this regard.

Whether it is Meta, OpenAI, Microsoft, or Google, these companies act with impunity, prioritizing growth and profit over any ethical considerations or any concern for public welfare. These companies, at the forefront of AI development, neglect the areas of greatest public or epistemological importance, lacking transparency and caution in their pursuit of technological advancement and market dominance.

So back in November 2022, OpenAI, without any guardrails, essentially conducted a massive social experiment at a global scale, with no care or concern for what that would mean.

And now we have to deal with deepfakes, misinformation, the rise of AI-generated garbage on the web, and god only knows what more. (Again, topics I have written about quite a bit on this website.)

All this done with no broader public discussion. With no care for what it would mean. First mover advantage was all that mattered. Nothing else.

And it has been wildly successful. ChatGPT (clearly that name was NOT market-tested) is maybe the biggest “brand” in the world today.

And I can see, a few years from now, Sam Altman making a statement like the one Zuckerberg made last week, apologizing for the pernicious impact a technology his company created had on people and the world at large. (And, I must add, despite seeming off-the-cuff and from the heart, Zuckerberg’s comments were quite definitely premeditated. How else would you explain the fact that Meta’s marketing team, within minutes of his speaking, shared his comments with tech journalists and on social media?)

I have spoken about how, whether or not you as an educator use social media for teaching, social media and its pernicious impact on the mental health of youth is now your problem to deal with. And now we have to deal with AI influencers!

We have seen this movie before.

As Adrienne LaFrance wrote recently in The Atlantic, in an article titled “The Rise of Techno-Authoritarianism”:

To worship at the altar of mega-scale and to convince yourself that you should be the one making world-historic decisions on behalf of a global citizenry that did not elect you and may not share your values or lack thereof, you have to dispense with numerous inconveniences—humility and nuance among them. Many titans of Silicon Valley have made these trade-offs repeatedly. YouTube (owned by Google), Instagram (owned by Meta), and Twitter (which Elon Musk insists on calling X) have been as damaging to individual rights, civil society, and global democracy as Facebook was and is. Considering the way that generative AI is now being developed throughout Silicon Valley, we should brace for that damage to be multiplied many times over in the years ahead.

So yes, we (as educators) should debate, discuss, and promote the ethical use of AI. We should also create policies that help us navigate these messy times. I just wish that the people running these companies did the same before imposing their ideas on all of us.

End of rant. Thanks for listening.

1 Comment

  1. Jaciara Carvalho

    Punya, thank you! You have illuminated a highly educational point in this debate that I will take to discuss with my students. As researchers (you, me, and the students), we conduct research with a very small number of people compared to what BigTech does. We seek to anticipate potential risks to research participants and inform them of these risks (in addition to being involved in the research, of course). Our studies require approval from an Ethics Committee. But BigTechs are out there all the time, doing research and experiments for decades with us without our knowledge, without any organization monitoring this on a global scale, causing changes in our ways of being, understanding, and living collectively. CEOs and investors multiply their wealth at our expense (often through manipulation). And, in the end, many students dream of being one more like them. What other dreams as attractive as this do we need to promote in our students?

