Os Keyes and Kathleen Creel
Feminist philosophy has a long history of descriptive and normative engagement with artificial intelligence (AI). In 1998, Alison Adam critiqued symbolic AI systems as purporting to represent a universal perspective, a Nagelian “view from nowhere”, while in fact presenting the perspectives of the predominantly white, middle-class, male mathematicians who built them (Nagel, 1989; Adam, 1998). As Adam notes, representing these perspectives, and no others, was a choice (Adam, 1998). Likewise, contemporary machine learning systems are typically engineered either to reflect the perspectives of socially dominant groups, often thereby inscribing the biases held by those groups (Noble, 2018; Benjamin, 2019), or to re-present each person with their own perspective by “personalizing” recommendations or models. These, too, are choices.
As AI systems and concerns about them become widespread, it is vital to integrate feminist critiques of AI into sociotechnical practice. But the relation between theory and practice is bidirectional. In this paper, we ask not only what role feminist theory might play in understanding and shaping AI, but also whether algorithmic systems themselves might serve as useful sites in which to practice, prototype and explore feminist theory. In particular, we look at decision-making around AI as a possible site in which to test the robustness and consequences of feminist theories of perspective-taking.
Understanding that others are different from oneself is a minimal precondition for the possibility of political activity. One way to fail to meet this precondition is to see the other as identical to oneself, perhaps by imagining that all possible others share one’s own particular features, in what Seyla Benhabib has called “substitutionalist universalism” (Benhabib, 1992). Feminist philosophers and their allies have critiqued the “featureless observer” (Daston, 1992) who takes up the “view from nowhere” and with it the mantle of objectivity (Haraway, 1988; Kukla, 2006; Harding, 2015), as well as the universal knower whose perspective is often observed to coincide with that of the dominant group (Mills, 2007; Young, 2011; Khader, 2018). But the extent to which we can exceed this minimal precondition by taking up and understanding the perspective of another remains a contested question within feminist philosophy and political theory.
Perspective-taking matters. If the optimists are right, perspective-taking is a valuable activity that could combat epistemologies of ignorance (Mills, 2007; Alcoff, 2007), perhaps by encouraging “world”-travelling (Lugones, 1987; Bowman, 2020) or enabling an empathetic “imaginative capacity” (Paul, 2016; Langton, 2019; Toole, 2020). If the pessimists are right, the illusion that robust perspective-taking is possible leads to the dangerous belief that the perspective of marginalized others is no longer necessary, as members of dominant social groups can inhabit and speak for both perspectives (Young, 1994).
In this article, we present a series of speculative but perfectly achievable examples of algorithmic systems that model multiple perspectives. Our systems are designed not only to take into account feminist critiques of perspective-taking but also to actively engender space for collision, relation and what de Castillo describes as “radical friction” (de Castillo, 2018). We argue that creating and experimenting with such designs is a fruitful way to engage in a feminist “critical technical practice” (Agre, 1997) of artificial intelligence: one that allows us both to reshape the epistemic frames in which algorithmic systems are developed and to practically test and implement feminist theories of epistemic justice and moral relation.