Katherine Bailey and Oisín Deery
Algorithms embed morally relevant biases. Often, addressing these biases presents an ethical dilemma, which underlies recent controversies about bias in data (e.g., Bender et al. 2021). Our way of spelling out this dilemma helps to re-frame such controversies in useful ways.
Consider a case of easily avoidable bias. In 2017, researchers at Stanford University trained a neural network to identify skin cancer (Esteva et al. 2017). Yet the system was trained on photographs of predominantly white people and could not reliably classify lesions in black patients (Deery & Bailey 2018). Such bias is easily avoided by training systems on more representative data. However, bias is not always so easily avoided.
For example, models called word embeddings capture semantic relationships between words as geometric relationships between vectors. Yet to work successfully, these models evidently have to learn the biases present in the bodies of text on which they are trained (Caliskan et al. 2017). Thus, just as “man is to woman as king is to queen” is captured by the relationships between vectors in the model, so too is “man is to computer programmer as woman is to homemaker” (Bolukbasi et al. 2016). Worse, models of this sort can end up amplifying such biases (Bolukbasi et al. 2016).
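For concreteness, the analogy arithmetic can be reproduced with off-the-shelf tools. The following is a minimal sketch, assuming the gensim library and its downloadable “word2vec-google-news-300” vectors (the Google News embeddings studied by Bolukbasi et al. 2016); the exact nearest neighbours returned, and even the availability of the phrase token “computer_programmer”, depend on the particular model used.

```python
# Minimal sketch of analogy arithmetic in a word-embedding model.
# Assumes gensim is installed; "word2vec-google-news-300" is one of
# gensim's standard downloadable models (large download), and the
# neighbours returned depend on that particular corpus.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")  # returns KeyedVectors

# "man is to woman as king is to ?":
# vec(king) - vec(man) + vec(woman) lands nearest to vec(queen).
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))

# The same arithmetic surfaces learned social bias, as in the analogies
# reported by Bolukbasi et al. (2016): "man is to computer programmer
# as woman is to ?" (the phrase token is assumed to exist in this model).
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=1))
```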
Should we instead control for biased outputs? Perhaps natural-language-generation systems should never produce such outputs. This suggestion is naïve. Consider the sentence, “A proper wife should be as obedient as a slave” (cf. Aristotle, Oeconomica, Bk. i). If a speech-recognition system receives this sentence as input, the output must be what was said; there is no scope for de-biasing. In translation tasks, there is likewise limited scope for de-biasing. By contrast, we do not want a dialog system to provide this sentence in response to the question, “How obedient should a proper wife be?” Here, de-biasing outputs is required.
The issue comes into sharper relief with search. Say a Google image-search for “black teenagers” returns mostly police mugshots, while a search for “white teenagers” returns stock images of smiling kids. If a user is wondering whether Google’s search algorithm is racist, these results would seem to confirm that verdict. If instead they are wondering whether there is racial bias in how crimes are reported in the media, the results also support that verdict. However, were Google somehow to alter the results such that an equal mix of mugshots and smiling faces appeared for both searches, this user might well come away thinking—mistakenly—that all is right with the world, when clearly it is not.
Consequently, we maintain that there is sometimes ethical value in not de-biasing outputs. If so, we confront an ethical dilemma. Either we do not de-bias, in which case we retain potentially ethically useful information yet run the risk of amplifying bias; or we de-bias, thereby avoiding amplifying bias yet losing descriptive accuracy and withholding ethically useful information. In this paper, we draw attention to this under-appreciated dilemma and show how it helps to explain and re-frame recent controversies about algorithmic bias.
BIBLIOGRAPHY
Aristotle (1935), Oeconomica (trans. G. C. Armstrong). New York: W. Heinemann.
Bender, Emily, Timnit Gebru, Angelina McMillan-Major & Margaret Mitchell (2021), “On the dangers of stochastic parrots: Can language models be too big?” in Conference on Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021, Virtual Event, Canada. ACM.
Bolukbasi, Tolga, Kai-Wei Chang, James Zou, Venkatesh Saligrama & Adam Kalai (2016), “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings,” arXiv:1607.06520 [cs.CL].
Caliskan, Aylin, Joanna J. Bryson & Arvind Narayanan (2017), “Semantics derived automatically from language corpora contain human-like biases,” Science, 356(6334): 183–86.
Deery, Oisín & Katherine Bailey (2018), “Ethics, bias, and statistical models,” Input paper for the Horizon Scanning Project, The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve Our Wellbeing, on behalf of the Australian Council of Learned Academies (ACOLA).
Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau & Sebastian Thrun (2017), “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, 542: 115–18.