Martin Miragoli and Daniela Rusu
Machine-learning algorithms play a fundamental role in our lives. Just think of the impact that tools such as the Google search engine have had on our everyday experience. In recent years, there has been an explosion of interest in themes at the intersection of AI and technology on the one hand, and feminism, race and identity theory on the other. For the first time, these studies have brought to the public's attention the social risks involved in the use of AI. However, these works focus mainly on the mechanisms generating the wrong, namely the bias often involved in AI algorithms. This has clear practical implications, as it indicates what needs to be changed. If we want to bring about this change, however, it is crucial that knowledge of the cause of the injustice be accompanied by knowledge of how to resist it. This question is arguably more fundamental, as it informs us about the precautions that ought to be implemented to prevent the same injustice from being perpetrated again. Still, this problem has not yet been addressed in the literature.
In this paper, we aim to tackle this how-question by looking at the injustice and the way in which AI perpetrates it. To do so, we focus our attention on injustices of a specific epistemic kind (i.e., perpetrated against someone who is harmed in their capacity as a knower), and argue that there are two ways in which machine-learning algorithms can be harmful in this sense. First, by flattening the quality of the epistemic goods of our community (e.g., by making fewer true beliefs or less knowledge available). AI trained on large, unreviewed language models is at risk of inheriting the biased, sexist and racist language that is common in the online community, at the expense of the normative changes brought about by the more nuanced, mindful vocabulary of some minority groups (such as language experts, the BLM or MeToo movements, or specific ethnic groups). Second, AI can be epistemically harmful by flattening the variety of the epistemic goods available. This is because the regional norms and vocabulary of minority groups that have little to no access to the online community are not represented in the language models deployed to train an AI.
Following Fricker (2007), we argue that because it does the former, machine-learning AI perpetrates a particular form of testimonial injustice: that is, it sometimes fails to attribute the right level of credibility to minority groups. Because it does the latter, it perpetrates a particular form of hermeneutical injustice: that is, it hinders minority groups' understanding of fundamental areas of their social life. Finally, we argue that because it specifically affects populations rather than single individuals, the two forms of epistemic injustice at play must be understood as distinctively collective.