Artificial Intelligence, Medicine, and the Calcification of Epistemic Injustice

Mahi Hardalupas

Much has been written about how epistemic injustice is present in medicine and psychiatry (Carel & Kidd, 2014; Miller Tate, 2018). While some have discussed epistemic injustice and Artificial Intelligence (AI) (Chin-Yee & Upshur, 2019), more work is needed to understand how and why AI systems contribute to epistemic injustice in medicine. In this paper, I argue that AI systems have a tendency toward what I call epistemic calcification – the process by which knowledge systems “harden” and become set in a fixed framework for understanding a problem. This can result in epistemic injustice, as AI systems reinforce negative effects intrinsic to their fixed epistemic framework. I demonstrate how this occurs by analyzing three examples of AI use in medicine: diagnosis, prognosis, and pain management decisions.

First, I consider diagnosis. Scrutton (2017) identifies two issues arising in traditional medical diagnosis: 1) service users being treated as data sources rather than as participants in the diagnostic process, and 2) service users’ experiences being forced into an existing framework of understanding to the exclusion of other factors important to the service user. Recent proposals to use AI to “diagnose” schizophrenia based on speech patterns do nothing to alleviate these concerns (Corcoran & Cecchi, 2020). Instead, the use of AI in psychiatry exacerbates the issue of contributory injustice – when a marginalized group cannot contribute equally to the collective understanding of their experiences (Dotson, 2012; Miller Tate, 2018). The epistemic calcification of AI systems solidifies an understanding of schizophrenia from a single medical perspective, as has been seen in other cases such as autism (Bennett & Keyes, 2020). Not only does this result in a lack of service user participation in the process of diagnosis itself, but it also renders diagnostic categories resistant to change by service users and medical practitioners alike. Thus, epistemic calcification summarily dismisses the experiences of service users and how they understand their symptoms – for example, someone who hears voices but does not interpret this experience through a medical model.

Second, I consider prognosis and pain management. In her analysis of the traditional AI systems Cyc and Soar, Adam (1998) demonstrated how early AI systems implicitly encoded a white, male, middle-class perspective on knowledge. The same concerns apply to modern AI systems used for prognosis and pain management, which systematically underestimate how sick Black patients are (Obermeyer et al., 2019; Pierson et al., 2021). Here, there is epistemic calcification of who counts as the ‘knowing subject’, which, in turn, exacerbates existing social biases in medicine.

In summary, I introduce the concept of epistemic calcification to explain the potential for epistemic injustice perpetuated by AI systems in medicine. I end by exploring whether feminist epistemology has the resources to disturb the epistemic calcification of AI and by pointing to further cases of AI in medicine that may benefit from analysis in terms of epistemic injustice and oppression.

References:
Adam, A. (1998). Artificial Knowing: Gender and the Thinking Machine. London: Routledge.
Bennett, C. L., & Keyes, O. (2020). What is the point of fairness? Disability, AI and the complexity of justice. ACM SIGACCESS Accessibility and Computing, (125), 1–1.
Carel, H., & Kidd, I. J. (2014). Epistemic injustice in healthcare: a philosophical analysis. Medicine, Health Care and Philosophy, 17(4), 529–540. https://doi.org/10.1007/s11019-014-9560-2
Chin-Yee, B., & Upshur, R. (2019). Three problems with big data and artificial intelligence in medicine. Perspectives in Biology and Medicine, 62(2), 237–256. https://doi.org/10.1353/pbm.2019.0012
Corcoran, C. M., & Cecchi, G. A. (2020). Using Language Processing and Speech Analysis for the Identification of Psychosis and Other Disorders. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 5(8), 770–779. https://doi.org/10.1016/j.bpsc.2020.06.004
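Dotson, K. (2012). A Cautionary Tale: On Limiting Epistemic Oppression. Frontiers: A Journal of Women Studies, 33(1), 24–47.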
Miller Tate, A. (2018). Contributory injustice in psychiatry. Journal of Medical Ethics, 1–4. https://doi.org/10.1136/medethics-2018-104761
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Pierson, E., Cutler, D. M., Leskovec, J., Mullainathan, S., & Obermeyer, Z. (2021). An algorithmic approach to reducing unexplained pain disparities in underserved populations. Nature Medicine, 27(1), 136–140. https://doi.org/10.1038/s41591-020-01192-7
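Scrutton, A. P. (2017). Epistemic injustice and mental illness. In I. J. Kidd, J. Medina, & G. Pohlhaus Jr. (Eds.), The Routledge Handbook of Epistemic Injustice. London: Routledge.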
