Dissecting scientific explanation in AI (sXAI)

A case for medicine and healthcare


Abstract

Explanatory AI (XAI) is on the rise, gaining enormous traction with the computational community, policymakers, and philosophers alike. This article contributes to this debate by first distinguishing scientific XAI (sXAI) from other forms of XAI. It further advances the structure for bona fide sXAI, while remaining neutral regarding preferences for theories of explanation. Three core components are under study, namely: i) the structure for bona fide sXAI, consisting in elucidating the explanans, the explanandum, and the explanatory relation for sXAI; ii) the pragmatics of explanation, which includes a discussion of the role of multiple agents receiving an explanation and the context within which the explanation is given; and iii) a discussion of Meaningful Human Explanation, an umbrella concept for the different metrics required for measuring the explanatory power of explanations and the involvement of human agents in sXAI. The AI systems of interest in this article are those utilized in medicine and the healthcare system. The article also critically addresses current philosophical and computational approaches to XAI. Amongst its main objections, it argues that classifications have long been interpreted as explanations, when the two should be kept separate.
