The ever-increasing presence of Machine Learning (ML) algorithms and Artificial Intelligence (AI) agents in safety-critical and sensitive fields over the past few years has spurred a large body of research into Explainable Artificial Intelligence (XAI) techniques. This new frontier of AI research aims to resolve some of the fundamental issues that accompany the use of ML algorithms in sensitive fields such as medicine or criminology. For ML algorithms to be deployed and used within fields such as medicine, it is not enough that they are proficient and effective tools for solving their assigned task (such as classifying whether or not a patient has Covid-19). Most ML techniques do not allow their human counterparts to understand why a particular prediction was made, denying the human supervisor a peek inside the black box. This black-box problem is one of the underlying difficulties that currently prevent the widespread adoption of ML/AI algorithms in these safety-critical fields. XAI techniques aim to solve this very issue, and Model-Agnostic XAI techniques in particular aim to generate explanations for the predictions of any ML or AI algorithm. In this paper we explore and investigate the different Model-Agnostic XAI techniques and examine their individual advantages and disadvantages. After analyzing each individual technique, we take a broader view of their characteristics and compare the XAI implementations against each other using several metrics. Finally, we propose future improvements and extensions that can be made to the various investigated XAI techniques.