Ting-An Lin and Po-Hsuan Cameron Chen
Ethical considerations regarding AI have been recognized as an urgent issue and have attracted numerous attempts to address them. Thus far, most discussions in AI ethics center on the unequal distribution of resources, treat the bias of AI systems as the primary source of the problem, and try to solve the problem mainly through technical methods. However, noting that AI development is situated in a world already rife with social injustice, we argue that these discussions not only obscure the nature of AI ethics but also risk perpetuating existing social inequalities.
This paper has two main objectives. First, we argue that the existing discussions are insufficient to answer the fundamental questions of AI ethics: (1) “What kinds of problems should be addressed?” (2) “Where is the problem located?” and (3) “How can the problems be solved?” Second, we discuss how the analysis of AI ethics could be revised. We argue that the philosophical framework of structural injustice is well suited to analyzing AI ethics and that it offers more comprehensive answers to the three questions above.
Using AI’s application in healthcare as an example, we argue that structural injustice (SI) provides a suitable moral framework for analyzing the ethical concerns of AI. The SI framework captures a type of wrongdoing that cannot be reduced to individual wrongdoings or repressive policies but instead needs to be accounted for from a structural perspective. It reveals that while several actions might be morally neutral or even morally required when evaluated separately, their interaction can constitute an unjust social structure that exposes some groups of people to unfair risk while conferring privilege and dominating power on others. By examining the healthcare AI development process, from problem selection and data collection to algorithm development and deployment, we discuss how the interactions between AI systems and the existing social structure could exacerbate health inequalities.
Furthermore, we draw three implications from the SI analysis to answer the questions of AI ethics raised above. First, the ethical problems to be tackled go beyond resource distribution; the power imbalances and hierarchies embedded in the development process should also be of concern. For example, we should ask: “Whose voices are included in AI development? How does AI development influence the power structure?” Second, the ethical problems do not lie in the design of AI systems themselves but rather emerge from the interactions between AI and the existing social structure. Thus, the focus of AI ethics should expand from merely “designing a fair AI system” to “ensuring a just social structure, of which AI technology may be one constitutive factor.” Third, because these problems often go beyond engineering design, technology alone cannot be the full solution. To comprehensively address the issues of AI ethics, diverse approaches that examine social, political, and institutional orders should also be developed. We hope that the proposed framework can shed light on the nature of AI ethics and provide a theoretical foundation from which experts in different domains can work together to advance AI toward greater social justice.