Karen Frost-Arnold
This paper investigates epistemic injustice in the field of AI ethics through a case study of Google’s firing of Timnit Gebru. Gebru is a leader in the field of AI ethics and a co-founder of the Black in AI group. Until 2020, she was co-lead of Google’s Ethical AI team. On December 2, 2020, Gebru disclosed that she had been fired from Google. Since announcing her firing, Gebru has detailed “micro and macro aggressions and harassments” during her years at Google (qtd. in Newton 2020). The impetus for her firing was a conflict surrounding Google’s demand that Gebru retract or remove her name from a co-authored research paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” My case study focuses first on how the ‘Stochastic Parrots’ paper reveals significant problems of hermeneutical injustice in natural language processing (NLP), and second on how Gebru’s firing reveals systemic epistemic injustice within the field of AI ethics.

NLP lies at the intersection of linguistics, computer science, and AI. Researchers in this field aim to create algorithms that can read, process, and ultimately understand human languages. Creating NLP algorithms with machine learning techniques requires large data sets on which to train the algorithms, and a common source for such large corpora of human-language text is the internet. But a known problem is that the internet is full of biased, racist, and sexist speech. Additionally, online data sets give disproportionate weight to the speech of dominant groups: the linguistic patterns, phrases, and languages of dominant groups are overrepresented in the data (a toy illustration of this skew follows the abstract below). Thus, I show how the ‘Stochastic Parrots’ paper reveals hermeneutical injustice in NLP: the design of NLP algorithms leads to the speech of marginalized groups being systemically misunderstood and willfully ignored (cf. Pohlhaus Jr. 2012).

In the second part of the paper, I argue that (i) Gebru’s treatment at Google, (ii) the attacks on her credibility by Google and other detractors after the firing, and (iii) the responses of other marginalized members of the AI community to Gebru’s firing expose a culture of pervasive epistemic injustice in AI as a field. Much work in AI ethics aims to focus attention on the harmful impacts of AI; AI ethicists attempt to change the “move fast and break things” culture of Silicon Valley, which produces technologies that disproportionately harm marginalized communities. But when AI ethics scholars from marginalized groups attempt to do this valuable work, they are subjected to testimonial injustice (Fricker 2007), gaslighting (McKinnon 2017), and epistemic exploitation (Berenstain 2016). Furthermore, as Gebru herself has pointed out, women and people of color in this field are often stuck “trying to do cleanup after all the white men who put us in this mess” (qtd. in Johnson 2020). I investigate the epistemic value of this clean-up work and argue that pervasive retaliation against it, together with its systemic devaluation, perpetuates epistemic injustice.
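The data-skew mechanism described above can be made concrete with a small sketch. The following Python toy example is my own illustration, not the methodology of the ‘Stochastic Parrots’ paper: the corpus, the dialect feature, and the association score are all hypothetical. It shows how a model trained on a sample dominated by one group’s speech reproduces that group’s associations, while the marginalized group’s speech carries almost no statistical signal for the model to learn.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy "web-scraped" corpus (illustration only): sentences in a
# dominant dialect are heavily overrepresented relative to a marginalized one.
dominant_sentences = [
    "the doctor said he would call back",
    "the engineer said he was too busy",
]
marginalized_sentences = [
    "she stay working hard at the clinic",  # habitual 'stay' (dialect feature)
]
corpus = dominant_sentences * 50 + marginalized_sentences  # skewed sample

# "Train" the simplest possible model: word and co-occurrence counts.
word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    word_counts.update(tokens)
    pair_counts.update(frozenset(pair) for pair in combinations(set(tokens), 2))

def association(w1, w2):
    """Crude co-occurrence association score between two words."""
    denom = word_counts[w1] * word_counts[w2]
    return pair_counts[frozenset((w1, w2))] / denom if denom else 0.0

# The skewed data bakes in the dominant group's gendered association...
print("doctor~he: ", association("doctor", "he"))   # nonzero
print("doctor~she:", association("doctor", "she"))  # zero
# ...while the marginalized dialect is barely represented, so its
# constructions are statistically invisible to the model.
print("count('stay'):", word_counts["stay"], " count('he'):", word_counts["he"])
```

On this toy measure, the marginalized dialect’s features are effectively invisible: that statistical invisibility is the shape of the misunderstanding and willful ignorance the abstract describes, scaled down to a few lines of counting.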
References
Berenstain, Nora. 2016. “Epistemic Exploitation.” Ergo, an Open Access Journal of Philosophy 3 (22): 569–90.
Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. New York: Oxford University Press.
Johnson, Khari. 2020. “Timnit Gebru: Google’s ‘Dehumanizing’ Memo Paints Me as an Angry Black Woman.” VentureBeat (blog). December 10, 2020. https://venturebeat.com/2020/12/10/timnit-gebru-googles-dehumanizing-memo-paints-me-as-an-angry-black-woman/.
McKinnon, Rachel. 2017. “Allies Behaving Badly: Gaslighting as Epistemic Injustice.” In The Routledge Handbook of Epistemic Injustice, edited by Ian James Kidd, José Medina, and Gaile Pohlhaus Jr., 167–74. New York: Routledge.
Newton, Casey. 2020. “The Withering Email That Got an Ethical AI Researcher Fired at Google.” Platformer. December 3, 2020. https://www.platformer.news/p/the-withering-email-that-got-an-ethical.
Pohlhaus Jr., Gaile. 2012. “Relational Knowing and Epistemic Injustice: Toward a Theory of Willful Hermeneutical Ignorance.” Hypatia 27 (4): 715–35. https://doi.org/10.1111/j.1527-2001.2011.01222.x.