Heather Stewart, Emily Cichocki, and Carolyn McLeod
This paper discusses how AI algorithms on social media can exacerbate epistemic injustice and the epistemic distrust that accompanies it, further dividing people along lines of race, ethnicity, gender, class, and the like. It also reflects briefly on how to address this problem and how better, less divisive AI algorithms could be part of the solution.
Where there is epistemic injustice, there is distrust in the capacity of members of socially marginalized groups to be knowers as a result of “negative identity prejudicial stereotypes” (Fricker 2007). This epistemic distrust narrows what more privileged people can know, resulting in their “situated ignorance,” or gaps in their knowledge that exist because of their social position (Dotson 2011). This ignorance in turn breeds, or worsens, distrust between social groups.
Our central concern is with how AI algorithms can stoke the fires of this distrust and thus of epistemic injustice. While this problem arises in different contexts (e.g., in the public service sector, with its use of AI screening tools), we focus on social media. We do so for several reasons: a growing number of people obtain the bulk of their news and other information through social media (Suciu 2019); as a result of COVID-19, social media is becoming an even more central site of social dis/connection (Samet 2020); and recent events (e.g., the storming of the U.S. Capitol) have brought processes of information sharing on social media to the fore of popular discourse and concern (Crist 2021).
On sites such as Facebook and Twitter, algorithms do two things that are of concern to us: targeting and sorting. The former involves directing people’s attention to content that is likely to be of interest to them (e.g., based on their ‘like’ patterns). The latter—epistemic sorting—involves leading people into echo chambers as a result of compounded targeting; people end up in different epistemic worlds, where they develop enhanced trust in people who occupy the same epistemic world as they do and distrust in anyone, or anything, outside of that world (Nguyen 2020). Both algorithmic targeting and sorting impact what individual users see and do not see in their social media feeds (Stern 2021). As a result, they shape people’s informational and social landscapes substantially.
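To make the feedback loop between targeting and sorting concrete, consider the following minimal sketch in Python. It is purely illustrative: the scoring rule, data shapes, and function names are our own invention, not any platform’s actual ranking system. The point it illustrates is structural: when content shown to a user is selected by similarity to their past ‘likes’, and that shown content then updates their interest profile, the feed narrows over successive rounds.

```python
# Illustrative sketch of compounded targeting (hypothetical, not a real
# platform's algorithm): content similar to past 'likes' is ranked higher,
# and what is shown then feeds back into the user's interest profile.

from collections import Counter

def rank_by_interest(posts, liked_topics):
    """Score each post by how often the user has liked its topics before."""
    profile = Counter(liked_topics)
    return sorted(posts,
                  key=lambda p: sum(profile[t] for t in p["topics"]),
                  reverse=True)

def simulate_feed(posts, liked_topics, rounds=3, feed_size=2):
    """Each round, the top of the feed reshapes the profile it was built from."""
    for _ in range(rounds):
        for post in rank_by_interest(posts, liked_topics)[:feed_size]:
            liked_topics.extend(post["topics"])  # shown content reinforces the profile
    return rank_by_interest(posts, liked_topics)[:feed_size]

posts = [
    {"id": 1, "topics": ["politics_a"]},
    {"id": 2, "topics": ["politics_a", "sports"]},
    {"id": 3, "topics": ["politics_b"]},
    {"id": 4, "topics": ["science"]},
]

# A user who starts with a single 'like' for politics_a ends up with a feed
# dominated by politics_a content; politics_b and science never surface.
print(simulate_feed(posts, liked_topics=["politics_a"]))
```

Even in this toy model, a single initial preference compounds into an effectively closed epistemic world, which is the dynamic we have in mind when we speak of sorting.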
We contend that the algorithmic processes of targeting and sorting sow division and distrust among groups that have traditionally been divided, e.g., different classes or races of people. They can have this effect without people realizing what is happening, as identity prejudicial stereotypes are subtly perpetuated within the information that people share or receive. The result is that AI algorithms contribute to an extant problem of epistemic injustice, both within social media and beyond it.
This paper is part of a larger research project on AI and social justice (especially epistemic justice). Our thinking has been influenced by our involvement with an interdisciplinary research group at our institution that focuses on these issues and includes people from units such as philosophy, computer science, and information and media studies.