The significant progress of Artificial Intelligence (AI) and Machine Learning (ML) techniques, such as Deep Learning (DL), has led to their successful adoption in solving a wide variety of problems. However, this success has been accompanied by increasing model complexity, resulting in a lack of transparency and trustworthiness. Explainable Artificial Intelligence (XAI) has been proposed as a solution to the need for trustworthy AI/ML systems. A large number of studies on XAI have been published in recent years, the majority of which discuss the specifics of individual XAI techniques. This work therefore aims to formalize the existing XAI literature from a high-level perspective in terms of (1) benefits, (2) requirements, (3) challenges, and (4) the underlying building blocks involved. Additionally, this paper presents a case study of XAI within the medical image analysis domain, followed by future work and research directions both in that field and from a general perspective, all serving as a foundation and reference point to make the topic more accessible to novices.