{"id":12432,"date":"2024-02-03T15:34:43","date_gmt":"2024-02-03T22:34:43","guid":{"rendered":"https:\/\/punyamishra.com\/?p=12432"},"modified":"2024-02-06T12:45:10","modified_gmt":"2024-02-06T19:45:10","slug":"the-absurd-one-sidedness-of-the-ethics-of-ai-debate-a-mini-rant","status":"publish","type":"post","link":"https:\/\/punyamishra.com\/2024\/02\/03\/the-absurd-one-sidedness-of-the-ethics-of-ai-debate-a-mini-rant\/","title":{"rendered":"The Absurd One-Sidedness of the Ethics of AI Debate: A rant"},"content":{"rendered":"\n
It seems no conversation about AI and education is complete without discussing the importance of the ethical use of the technology. There are numerous reports and academic articles about it (this<\/a> and this<\/a> and this<\/a> … I could go on and on). <\/p>\n\n\n\n There is, however, one aspect of this discussion that really pisses me off. <\/p>\n\n\n\n Don’t get me wrong. I think considering the ethical use of AI (and other technologies\/media) is incredibly important. And to be clear, these efforts reflect a genuine and well-intentioned recognition of the need to establish ethical guidelines to ensure the responsible integration of AI in education, including educating students about these issues. Digital citizenship is no longer constrained to the digital world (and one can argue that it never really was). Actions in the digital world have real consequences, and it is important that we, as educators, take these issues seriously. <\/p>\n\n\n\n My annoyance is with the absurd one-sidedness of the whole discussion. <\/p>\n\n\n\n Consider this scenario. If, in my role as a researcher, I have to interview just one person for a research study, I must have my research plan approved by my university’s institutional review board to ensure that my research will cause no harm to the interviewee (or anybody else for that matter). <\/p>\n\n\n\n Compare this to the fact that Meta conducted a secret mood-manipulation study<\/a> (back in 2012) where they deliberately tampered with users’ news feeds to study the impact on people’s emotional states. All this without seeking any consent from the participants. <\/p>\n\n\n\n And this is not an isolated example, nor is Meta the only culprit in this regard. <\/p>\n\n\n\n Whether Meta, OpenAI, Microsoft or Google, these companies act with impunity, prioritizing growth and profit over any ethical considerations or any concern for public welfare. 
These companies, at the forefront of AI development, neglect the areas of greatest public or epistemological importance, lacking transparency and caution in their pursuit of technological advancement and market dominance.<\/p>\n\n\n\n So back in November 2022, OpenAI, without any guardrails, essentially conducted a massive social experiment at a global scale. With no care or concern for what that would mean. <\/p>\n\n\n\n And now we have to deal with deep-fakes, misinformation<\/a>, and the rise of AI-generated garbage on the web, and god only knows what more. (Again, topics I have written about quite a bit on this website.) <\/p>\n\n\n\n All this done with no broader public discussion. With no care for what it would mean. First-mover advantage was all that mattered. Nothing else. <\/p>\n\n\n\n And it has been wildly successful. ChatGPT (clearly that name was NOT market-tested) is maybe the biggest “brand” in the world today. <\/p>\n\n\n\n And I can see, a few years from now, Sam Altman making a statement like the one Zuckerberg made last week, apologizing for the pernicious impact a technology his company created had on people and the world at large. (And, I must add, despite seeming off-the-cuff and from the heart, Zuckerberg’s comments were quite definitely premeditated. How else would you explain the fact that Meta’s marketing team, within minutes of his speaking, shared his comments with tech journalists and on social media?) <\/p>\n\n\n\n I have spoken about how, whether or not you as an educator use social media for teaching, social media and its pernicious impact on the mental health of youth is now your problem to deal with. And now we have to deal with AI influencers<\/a>! <\/p>\n\n\n\n We have seen this movie before. <\/p>\n\n\n\n