Algorithmic Microaggressions

Emma McClure and Benjamin Wald

We argue that machine learning algorithms can inflict microaggressions on minority communities and that recognizing these harms as instances of microaggressions helps us avoid solutions that simply recreate the same kinds of harms. The last decade has seen huge advances in machine learning and the ever-increasing centrality of algorithms to our daily lives. Drawing on vast training data sets, these algorithms can produce uncannily accurate predictions on a range of subjects, yet it has also become increasingly apparent that these systems can inherit the human biases present in their training data. Some systems have a large and obvious discriminatory impact, such as the COMPAS recidivism risk assessment tool used to inform parole decisions, which was found to be biased against Black defendants. Other harms, however, arise from the accumulation of smaller and individually less impactful instances of bias, which nonetheless, taken together, confront minority groups with prejudicial stereotypes and attitudes. The concept of microaggressions was developed to address exactly this sort of harm, and we argue that the concept remains applicable and illuminating when the perpetrator of the microaggressions is software rather than a human.

As a case study, we look at the problems faced by Google’s autocomplete predictions and the insufficiency of Google’s solutions. Safiya Umoja Noble’s 2018 book, Algorithms of Oppression, brought the problem of autocomplete to the public’s attention. The cover features the query “Why are black women so…” with the predictions “angry,” “loud,” and “lazy.” Noble’s book effected change, and quickly: by 2019, Google had completely revamped its autocomplete policies. To this day, Yahoo autocompletes “why are black people” with terms like “violent” and “inferior,” and “why are women” with “bitches” and “crazy,” but Google has made significant strides by allowing users to report “inappropriate predictions” for being “hateful against groups.”
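
To make concrete how observations like Noble’s (and ours) about autocomplete output can be gathered, the sketch below queries an unofficial Google suggest endpoint and prints the predictions returned for a query prefix. The endpoint, its parameters, and its response format are assumptions on our part: this is not a documented API, it may change without notice, and its output need not match what a signed-in user sees in the search box.

```python
# Minimal audit sketch (assumed, unofficial endpoint; not a documented API).
import json
import urllib.parse
import urllib.request

SUGGEST_URL = "https://suggestqueries.google.com/complete/search"


def autocomplete_predictions(prefix: str) -> list[str]:
    """Return the autocomplete predictions offered for a query prefix."""
    params = urllib.parse.urlencode({"client": "firefox", "q": prefix})
    with urllib.request.urlopen(f"{SUGGEST_URL}?{params}") as response:
        # With client=firefox the endpoint returns JSON: [query, [prediction, ...]]
        payload = json.loads(response.read().decode("utf-8", errors="replace"))
    return payload[1]


if __name__ == "__main__":
    for prefix in ["why are black women", "why are jews"]:
        predictions = autocomplete_predictions(prefix)
        print(f"{prefix!r}: {len(predictions)} predictions -> {predictions}")
```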

However, we argue that Google’s policy change has had troubling unintended consequences. The ability to report inappropriate predictions has left some searches with extremely restricted autocomplete options: “why are Jews” returns only three results (“kosher,” “the chosen people,” “circumcised”) instead of the usual ten, while “why are black women” returns no results at all. Looking at these outcomes through the lens of microaggressions reveals that Google’s fixes have not addressed the underlying issue. In removing explicitly racist, misogynistic, and anti-Semitic content, Google hasn’t canceled the microaggressive implicatures. Offering fewer predictions, or turning off autocomplete entirely, also sends a humiliating message: “The searches on this topic are so vitriolic that we can’t even list ten suggestions that aren’t derogatory.” Furthermore, given the centrality of Google to knowledge acquisition, we worry about the epistemic and temporal harms that accrue when users are impeded in accessing counter-stereotypical information.
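
The mechanism we are criticizing can be made explicit with a hypothetical sketch: flagged predictions are dropped without anything being offered in their place, so queries about marginalized groups end up with three suggestions, or none, where other queries get the usual ten. The data, the flag marker, and the function are all invented for illustration; this is not Google’s actual pipeline.

```python
# Hypothetical illustration of drop-without-replacement filtering.
FLAGGED = "<flagged>"  # stand-in for a prediction removed after user reports

RAW_PREDICTIONS = {
    # Invented data for illustration only.
    "why are cats": ["so cute", "so flexible", "afraid of cucumbers", "nocturnal",
                     "so soft", "so clean", "scared of water", "so curious",
                     "so independent", "so quiet"],
    "why are jews": ["kosher", "the chosen people", "circumcised",
                     FLAGGED, FLAGGED, FLAGGED, FLAGGED, FLAGGED, FLAGGED, FLAGGED],
}


def displayed_suggestions(prefix: str, max_results: int = 10) -> list[str]:
    """Drop flagged predictions; nothing counter-stereotypical is backfilled."""
    kept = [p for p in RAW_PREDICTIONS.get(prefix, []) if p != FLAGGED]
    return kept[:max_results]


for prefix in RAW_PREDICTIONS:
    shown = displayed_suggestions(prefix)
    print(f"{prefix!r}: {len(shown)}/10 suggestions shown")
```

The disparity in list length is itself the message: a user need not see the removed predictions to infer why the list is short.
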
The case study of autocomplete demonstrates our core claim that microaggressions constitute a distinct form of algorithmic bias and that identifying them as such is key to effectively addressing the problem. Google has a responsibility to make information freely available, without exposing users to degradation. To fulfill its duties to marginalized groups, Google must abandon the fiction of neutral prediction, and instead embrace suggestion.
