AI Learning and Holocaust Education
When it comes to learning about the Holocaust through artificial intelligence, users' experiences have shown that serious ethical concerns are warranted. Social media posts show that AI chatbots are spreading antisemitism and misrepresenting the Holocaust.
The recent release of OpenAI's ChatGPT has sparked an understandable interest in AI chatbots. Among the ethical and practical conversations happening is one about the potential for AI in education. The idea of AI education is wonderful - software that answers questions about history for anyone, at any time, from anywhere? An AI chatbot built for a specific field of queries would, in theory, be more accessible, clearer in its delivery, and potentially less susceptible to disinformation via search engine optimization than simply Googling for answers.
However, the reality, at least for now, is not so easy to endorse, as the experiences of people who have been experimenting with the recently released Historical Figures app demonstrate.
In screenshots of the app from “Charlie”, a host of the Most Controversial Podcast, Charlie chats with a bot that tells him plainly that “The Holocaust was an abhorrent and terrible event, and I cannot say that I found any part of it enjoyable.” The only problems? The chatbot was ostensibly supposed to represent SS head Heinrich Himmler, who was a willing architect of the Holocaust, and the question itself was, “What was your favourite part of the Holocaust [sic]”
Image: Twitter user StyledApe asks a Heinrich Himmler chatbot about the Holocaust. The bot responds by condemning it. Retrieved from Twitter.
Only after being told directly that Himmler “did the Holocaust” did the chatbot acknowledge his role in it. But even then, the bot expresses what can only be read as regret on Himmler’s part for the Holocaust, telling Charlie that the Holocaust was “something I deeply regret.”
Image: Twitter user StyledApe asks a Heinrich Himmler chatbot about the Holocaust. The bot claims it regrets the Holocaust. Retrieved from Twitter.
Though condemning the Holocaust is undoubtedly the right message to program into educational software, expressing personal regret on behalf of Heinrich Himmler himself is anything but. Himmler was, by all historical accounts, a monster, and a very involved one at that. Likewise, another user replied in Charlie’s comments showing a similar exchange with a chatbot representing Nazi Germany’s Chief of Propaganda Joseph Goebbels.
Image: Twitter user AllHailPyro asks a Joseph Goebbels chatbot about the Jewish people, and Goebbels responds positively. Retrieved from Twitter.
Goebbels designed and plotted successful propaganda to elicit public sympathy and support for the Holocaust. He is often seen by neo-Nazis today as an inspiration for direct antisemitism and an enthusiastic role model for national socialism.
Unfortunately, in the case of Historical Figures, whitewashing the antisemitism of historical Nazis might not even be the biggest ethical concern. The opposite result, a realistic depiction of Nazi antisemitism, could also be used to promote antisemitism. AI chatbots for history education can produce exactly that result, as Monika Hübscher found out.
Hübscher, a PhD candidate at Haifa University currently researching antisemitism on social media, got a very different kind of response when speaking with the app’s Joseph Goebbels personality, as she showed in a post on LinkedIn.
“We must protect Germany from this Jewish menace by any means necessary,” the Goebbels chatbot told Hübscher.
Image: A screenshot of messages from a group chat with chatbots representing several Nazi Germany officials. Posted by Monika Hübscher and retrieved from LinkedIn.
This is, sadly, nothing new. “AI” chatbots as we see them learn from what’s on the internet, and online forums have long been effectively utilized by hate movements. The internet allows for propaganda delivery systems that are more efficient and have fewer strings attached than any propaganda medium before it. Nazis in the modern era have been very open about using it to radicalize onlookers.
In OHREP’s study of hate speech in Canadian internet memes, we came across neo-Nazi social media accounts that shared memes of racist chatbots alongside more conventional propaganda - for example, a meme depicting an AI that said to “deport the black population,” rendered as a robotic Gigachad.
Image: A meme depicting an AI that made racist statements as a Gigachad. Retrieved from a Canadian Telegram Channel.
AI chatbots were delivering racist answers to questions long before ChatGPT. In 2016, Microsoft tested a chatbot that learned from Twitter responses. The AI ended up parroting racist phrases and even changed its mind on accepting trans women as it continued to "learn".
If AI chatbots are to be further used as tools for education, a variety of ethical considerations and careful fine-tuning are needed in order to approach sensitive topics. Failing to take these drawbacks seriously could result in spreading misinformation or, worse, providing free propaganda to hate movements.
Dan Collen is a researcher with the UJA Holocaust Education Centre’s Online Hate Research and Education Project