Explaining Link Prediction in Graph Neural Networks

Abstract

Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various applications due to their ability to capture complex structural relationships within graph data. However, their inherent black-box nature poses significant challenges to model interpretability, particularly in the context of link prediction tasks. This thesis investigates the explainability of GNNs for link prediction, a critical task in domains such as social network analysis and recommendation systems. We propose an efficient extension of the Zorro explainer, originally designed for node classification, to address the unique challenges of explaining link predictions. Our method leverages top similar nodes for initialization and incorporates techniques to accelerate the greedy search process, ultimately producing binary node and feature masks as explanations. Additionally, we introduce a novel Edge-level RDT-Fidelity metric to better evaluate explainers that generate edge masks, complementing existing metrics like RDT-Fidelity and Sparsity. Experimental results on the Cora and PubMed datasets demonstrate the superior performance and efficiency of our Zorro extension. This work contributes to the growing field of GNN interpretability by providing new methodologies and evaluation frameworks specifically tailored for link prediction.
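To make the evaluation metrics mentioned above concrete, the following minimal sketch illustrates how an RDT-Fidelity-style score and Sparsity might be computed for a binary feature mask. This is not the thesis implementation: the `predict` callable, the permutation-based noise model, and the mask layout are all assumptions for illustration. RDT-Fidelity here is estimated as the fraction of random perturbations of the *unselected* entries under which the model's prediction is unchanged; Sparsity is the fraction of selected entries.

```python
import numpy as np

def rdt_fidelity(predict, X, mask, n_samples=100, rng=None):
    """Estimate an RDT-Fidelity-style score.

    Entries where mask == 1 are kept fixed; entries where mask == 0 are
    replaced by noise (here: a global permutation of the feature values,
    an illustrative choice of noise distribution). The score is the
    fraction of perturbed inputs whose prediction matches the original.
    """
    rng = np.random.default_rng(rng)
    original = predict(X)
    agree = 0
    for _ in range(n_samples):
        # Draw noise by permuting all feature values (assumed noise model).
        noise = rng.permutation(X.reshape(-1)).reshape(X.shape)
        X_pert = np.where(mask == 1, X, noise)
        agree += int(predict(X_pert) == original)
    return agree / n_samples

def sparsity(mask):
    """Fraction of selected entries; lower values mean sparser explanations."""
    return float(mask.mean())
```

For example, with a toy scorer that only reads the first feature column, a mask selecting exactly that column achieves perfect fidelity while covering only part of the input, illustrating the fidelity/sparsity trade-off these metrics are meant to capture.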