With the rapid development of artificial intelligence (AI) technologies, deep learning-based structural health monitoring (DeepSHM) methods have gained significant attention. However, their black-box nature often limits interpretability and trust. The field of Explainable AI (XAI) aims to address this by enhancing model transparency and reliability through human-comprehensible explanations. This study investigates the use of XAI algorithms to interpret a 1D convolutional neural network (1D CNN) developed for Lamb-wave-based bolt-loosening detection in multi-bolted double-layer aluminum plates under varying temperatures. Four existing XAI algorithms were employed: Sensitivity Analysis, Deep Taylor, Gradient-weighted Class Activation Mapping (Grad-CAM), and Guided Grad-CAM. In addition, this paper introduces two new XAI methods, Smooth Simple Taylor and Deep Grad-CAM, as enhancements of the Simple Taylor and Grad-CAM methods, respectively. These six XAI algorithms were used to establish the relationship between the 1D CNN model parameters and the input vector. Their results were evaluated against the physical insights contained in the input vector using two proposed metrics, namely the Correlation Coefficient with Residual Signal and the Residual Signal Weighted Importance Score Ratio. These evaluation results, in conjunction with Infidelity, Sense sum, and a Sanity check, were used to rank the performance of the six XAI algorithms. The rankings were consistent across both simulation and experimental data sets, and the newly proposed Smooth Simple Taylor algorithm performed best on both. Overall, this research establishes a novel approach to using XAI algorithms to enhance the explainability of AI in practical engineering applications.