Evaluating the Explainability of Graph Neural Networks for Disease Subnetwork Detection


Abstract

Graph neural networks (GNNs), while effective at various tasks on complex graph-structured data, lack interpretability. Post-hoc explainability techniques, developed to compensate for this lack of interpretability, have also been applied to the additional task of detecting important subnetworks in graphs. For example, the GNN-SubNet program uses explanations of protein-protein interaction networks to detect the disease subnetworks most relevant to specific types of cancer. However, when post-hoc explanations are repurposed for such tasks, evaluating the quality of those explanations becomes critical.
This study implements four metrics to provide a fast and accurate way of evaluating explanation quality, using the GNN-SubNet program as a case study of explainable GNNs for subnetwork detection. Fidelity and sparsity are implemented as defined in the existing literature, while validity+ and validity- are newly defined. The results show that GNN-SubNet finds robust and faithful, but highly dense, explanations.
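
As a rough illustration only, and not the study's actual implementation, the sketch below computes fidelity and sparsity under one common formulation from the GNN explainability literature: fidelity as the drop in the predicted class probability when the explanation's edges are occluded, and sparsity as the fraction of edges left out of the explanation. The `predict` callable, edge arrays, and toy example are hypothetical stand-ins for the trained GNN; validity+ and validity- are newly defined by the study itself, so no definition is assumed for them here.

```python
import numpy as np

def fidelity_plus(predict, edge_index, edge_mask, target_class):
    """Fidelity: drop in the predicted probability of the target class
    when the edges the explanation marks as important are occluded.
    Higher values mean the explanation captures edges the model relies on.

    predict      -- callable mapping a (2, E) edge array to class scores
                    (a hypothetical stand-in for the trained GNN)
    edge_index   -- (2, E) array of graph edges
    edge_mask    -- boolean array of length E, True for explained edges
    target_class -- index of the class to score
    """
    edge_mask = np.asarray(edge_mask, dtype=bool)
    p_full = predict(edge_index)[target_class]
    p_occluded = predict(edge_index[:, ~edge_mask])[target_class]
    return p_full - p_occluded

def sparsity(edge_mask):
    """Sparsity: share of the graph NOT selected by the explanation.
    Values near 1 indicate a compact explanation; near 0, a dense one."""
    edge_mask = np.asarray(edge_mask, dtype=bool)
    return 1.0 - edge_mask.sum() / edge_mask.size

# Toy example: a 4-edge cycle and a fake "model" whose class-1 score
# grows with the number of surviving edges (purely illustrative).
edge_index = np.array([[0, 1, 2, 3], [1, 2, 3, 0]])
mask = np.array([True, True, False, False])
toy_predict = lambda ei: np.array([0.2, 0.2 + 0.1 * ei.shape[1]])
print(fidelity_plus(toy_predict, edge_index, mask, target_class=1))  # 0.2
print(sparsity(mask))  # 0.5
```

Under this formulation, a dense explanation such as those the study reports for GNN-SubNet would show high fidelity but low sparsity, which is exactly the trade-off the four metrics are meant to surface.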