Investigation of the evaluation techniques and tools used for model-specific XAI models

Abstract

The spread of AI techniques has led to their presence in critical situations, where increasing performance can come at the cost of understanding. Users with no prior AI knowledge, such as doctors or recruiters, rely on these techniques and need transparency and comprehensibility of the underlying mechanisms. Explainable Artificial Intelligence (XAI) techniques respond to these issues, and their diversity has led to the construction of a taxonomy for the domain; notably, a distinction is drawn between model-specific and model-agnostic techniques. A properly operational XAI technique should go through an evaluation process. In this paper, we investigate the tools and metrics available for the evaluation of XAI techniques, and then assess the evaluation quality of five state-of-the-art model-specific techniques: TCAV, SIDU, ACE, Net2Vec and Concept Analysis with ILP. We conclude that, despite a broad existing literature on evaluation methods, there is a lack of exhaustive assessment of criteria and a lack of standardization in the evaluation of these model-specific techniques.