Alysha Kassam and Patricia Marino
This paper contributes to debates over equity in algorithmic contexts, with particular attention to structural theories of racism and the ways algorithms perpetuate inequalities through proxy discrimination. We argue 1) that the structural approach lends new support to recent theories of algorithmic discrimination that emphasize effects and harms over reasons and intentions; 2) that aiming for “fairness” as a form of parity, such as equality of metrics between individuals or groups, is insufficient for addressing the relevant inequities; and 3) that we should instead adopt a “social impact framework” that centers on whether the use of an algorithm perpetuates or mitigates existing social stratification.
Central to discussions of algorithmic discrimination is the fact that algorithms pick up on social patterns of bias and inequality in the data they are trained on, then perpetuate these patterns. For example, if a hiring algorithm uses correlations between resume details and later job success to predict future success, then in contexts of existing discrimination, that algorithm will downgrade the resumes of people in marginalized groups. As is often noted, this occurs even if the algorithm has no access to information about group membership: instead, the algorithm finds correlations with facially innocuous attributes. This has been called the “proxy problem” and the result “proxy discrimination” (Johnson 2020, Prince and Schwarcz 2020, Bent 2019).
Before the introduction of algorithms, and influenced by US legal theory, anti-discrimination analysis often focused more on reasons and intentions and less on effects and harms. But recent theorists argue that since proxy discrimination can be unintentional, we need new framings (Prince and Schwarcz 2020, Ajunwa 2019, Bent 2019). We argue that structural theories of racism and oppression give new and unifying reasons for moving in this direction.
Some attempts to define algorithmic fairness focus on parity or balance, aiming, for example, for equality among groups in false positives, false negatives, or their ratios (Hellman 2020; see Huq 2019 and Bent 2019). We argue that the asymmetrical nature of inequity among groups shows that parity characterizations of fairness are insufficient in the relevant contexts.
Focusing on the criminal justice context, Aziz Huq (2019) argues that what matters is whether an algorithm perpetuates social stratification and whether the algorithm’s use results in a net cost or benefit for those in marginalized communities. Negative spillover effects are substantially greater for racialized minorities than for the racial majority; because the operation of criminal justice coercion generates asymmetrical harms, there will be a class of crimes for which a greater benefit is required to achieve net positive effects for Black suspects.
While Huq suggests that his approach is particular to the criminal justice context, we argue from the point of view of structural theories, which see racism and oppression in terms of unequal power structures that subordinate marginalized groups (Mills 2003, Lebron 2013), that a “social impact” approach like Huq’s, emphasizing effects rather than causes, but also difference rather than parity, is needed more globally.
Ajunwa, Ifeoma. “The Paradox of Automation as Anti-Bias Intervention.” Cardozo Law Review 41 (2019): 1671.
Bent, Jason R. “Is Algorithmic Affirmative Action Legal?” Georgetown Law Journal 108 (2019): 803.
Chander, Anupam. “The Racist Algorithm?.” Michigan Law Review 115, no. 6 (2017): 1023-1045.
Hellman, Deborah. “Measuring Algorithmic Fairness.” Virginia Law Review 106 (2020): 811.
Huq, Aziz Z. “Racial Equity in Algorithmic Criminal Justice.” Duke Law Journal 68, no. 6 (2019).
Johnson, G. M. “Algorithmic Bias: On the Implicit Biases of Social Technology.” Synthese (2020). https://doi.org/10.1007/s11229-020-02696-y
Kleinberg, Jon, Jens Ludwig, Sendhil Mullainathan, and Cass R. Sunstein. “Discrimination in the Age of Algorithms.” Journal of Legal Analysis 10 (2018).
Lebron, Christopher J. The Color of Our Shame: Race and Justice in Our Time. New York: Oxford University Press, 2013.
Mills, Charles W. “White Supremacy as Sociopolitical System: A Philosophical Perspective.” In Whiteout: The Continuing Significance of Racism, edited by Woody Doane and Eduardo Bonilla-Silva. New York: Routledge, 2003.
Prince, Anya and Daniel Schwarcz. “Proxy Discrimination in the Age of Artificial Intelligence and Big Data.” Iowa Law Review 105, no. 3 (2020): 1257.