Hsiang-Yun Chen, Linus T. Huang, Tzu-Wei Hung, and Ying-Tung Lin
Explainable artificial intelligence (XAI) aims to help humans understand how artificial intelligence (AI) systems generate their outputs and to address problems such as the black-box problem and algorithmic bias. While the current XAI literature acknowledges the importance of including the perspectives of the stakeholders involved, many significant issues remain to be clarified. This paper zeroes in on the fundamental questions of which social groups’ viewpoints should be included in XAI, and why and how to include them. We argue that XAI presupposes social contexts, so rather than merely emphasizing that incorporating stakeholders’ viewpoints helps people better understand AI, we articulate why considering a diversity of perspectives enables better XAI design. Specifically, we demonstrate how resources from feminist philosophy of science, standpoint epistemology, and the literature on epistemic injustice help identify important problems that are often overlooked in context-sensitive XAI. We further outline possible solutions to these problems and show that feminist insights can support the pursuit of better AI and XAI.