A Survey On Convolutional Neural Network Explainability Methods

Abstract

Artificial Intelligence (AI) is increasingly affecting people’s lives, and it is even employed in fields where human lives depend on its decisions. However, these algorithms lack transparency, i.e. it is unclear how they arrive at an outcome. If, for instance, the AI’s task is to classify images, it learns this task from examples provided to it (e.g. an image of a cow in a meadow). The algorithm may then focus on the wrong part of the image: instead of the foreground (the cow), it could rely on the background (the meadow) and thereby produce a false output (e.g. a horse instead of a cow). Revealing such behaviour requires an explanation. For this reason, a variety of methods, called explainability methods, have been developed to explain the reasoning behind these algorithms. In this paper, six local explainability methods are discussed and compared. These methods were chosen because they are the most prominently used explainability approaches for Convolutional Neural Networks (CNNs). By comparing methods with analogous characteristics, this paper shows which methods outperform others. Furthermore, their advantages and limitations are discussed. The comparison shows that Local Interpretable Model-agnostic Explanations (LIME), Layer-wise Relevance Propagation (LRP) and Gradient-weighted Class Activation Mapping (Grad-CAM) perform better than Sensitivity Analysis, Deep Taylor Decomposition and Deconvolutional Network, respectively.