Michael Barnes
This paper has four main tasks: (1) to discuss the role of Artificial Intelligence (AI)—along with algorithms more broadly—in online radicalization; (2) to argue that technological solutions like better AI are inadequate; (3) to demonstrate that the preference for technological solutions reveals an ideology that erases the work of thousands of human content moderators; and (4) to analyze these issues as a problem of subordinating speech.
The horrific murders in Québec City, Christchurch, and elsewhere offer clear proof of the grave importance of addressing the worrying trend of online hate spilling into the ‘real’ world. We are quickly coming to understand that “self-radicalization,” abetted by online platforms, is a growing problem that demands our immediate attention. But we are only beginning to understand how and why this hate spreads so rapidly on new communication channels. Emerging research makes it clear that online platforms like Facebook and YouTube play a vital role in accelerating the rise of extremism at both the individual and the societal level. By leading users down a path of radicalization, and then serving as an essential medium connecting formerly isolated extremists, Big Tech is clearly implicated in this problem. And, after years of public pressure, these companies are (finally) pledging to do better.
In this paper, I focus on one strategy Big Tech has chosen to address this issue: improved AI-driven algorithmic recommendation and moderation. While these algorithmic solutions have some potential to limit audience exposure to extremist content, I argue they are not the panacea they are sometimes presented as. I survey some of the technological challenges AI solutions face, but more fundamentally I challenge the idea that this is mainly a technological problem, one that therefore calls for a technological solution. I argue that the preference for a technological solution instead serves an ideological function for Big Tech. That is, rather than being offered as a good-faith proposal, its primary purpose is to distract us from the real, avoidable human harm these companies contribute to worldwide.
A compelling entry point for understanding these issues, I claim, rests in the under-told story of the thousands of human content moderators employed by the many technology companies that profit from user-generated content. As things stand, Big Tech relies on these moderators—often employed via third-party intermediaries—to sift through countless toxic posts every day. Most of these moderators are underpaid contractors, often overseas, suffering behind NDAs. Yet they are essential workers in the global information supply chain, and the harms they experience are ones that any full account of online hate must address. And, as I will argue, the neglect they are shown in the public stances of Big Tech reveals an overall ideology in which technological ‘progress’ is valued over human wellbeing.
Throughout, I analyze these issues as a problem of subordinating speech. This framing is apt because the internet is a medium of communication (and so these are speech harms), and because algorithms themselves have an expressive function. The result is a fruitful combination of social philosophy of language and data ethics.