Little or Large?

The effects of network size on AI explainability in Side-Channel Attacks


Abstract

For a system to be intelligent means that it can interpret data, learn from it, and use those learnings to reach goals and perform tasks [1]. Since such systems are not a product of nature but are made by humans, they are called Artificial Intelligence (AI). The field of Side-Channel Attacks (SCA) has benefited from applying AI systems to its problems: operations that were previously too resource-intensive to perform can now be executed using AI. Currently, the focus lies on exploring which parameters yield the best performance when classifying side-channel traces, but since AI has only recently been applied to this domain, much research remains to be done. The literature claims that reducing the size of an architecture improves the explainability of the resulting models; however, this effect has not been explicitly shown to hold for SCA models, leaving a gap in knowledge. This paper aims to close that gap by examining this assumption. The goal is to explore whether a reduction in the complexity of SCA models leads to improved explainability. An experiment was conducted using two existing SCA architectures, one of small and one of large complexity. Using heatmaps, the explainability of these models was assessed to investigate the existence of patterns. The results show a difference in the consistency of the classification process: the model with the lowest complexity could more consistently indicate why a certain classification was made. This indicates that the explainability of a given SCA model can be improved by decreasing its complexity.
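To illustrate the kind of heatmap-based assessment described above, the sketch below computes a gradient saliency heatmap for a 1D convolutional trace classifier. It is a minimal illustration, not the authors' actual pipeline: the network architecture, trace length (700 samples), class count (256 key-byte values), and the choice of plain input-gradient saliency are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's exact models or method):
# a gradient-based saliency heatmap for a 1D CNN side-channel trace classifier.
import torch
import torch.nn as nn

TRACE_LEN = 700      # assumed number of samples per trace
NUM_CLASSES = 256    # e.g. one class per key-byte value

class SmallSCANet(nn.Module):
    """A deliberately small 1D CNN, standing in for a low-complexity SCA model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=11, padding=5),
            nn.ReLU(),
            nn.AvgPool1d(2),
        )
        self.classifier = nn.Linear(4 * (TRACE_LEN // 2), NUM_CLASSES)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

def saliency_heatmap(model, trace, target_class):
    """Return |d(class score)/d(input)| per sample point as an explainability heatmap."""
    model.eval()
    x = trace.clone().requires_grad_(True)        # shape (1, 1, TRACE_LEN)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze()                 # shape (TRACE_LEN,)

if __name__ == "__main__":
    model = SmallSCANet()
    trace = torch.randn(1, 1, TRACE_LEN)          # placeholder trace
    heatmap = saliency_heatmap(model, trace, target_class=0)
    print(heatmap.shape, heatmap.argmax().item()) # most influential sample index
```

Comparing how consistently such heatmaps highlight the same trace regions across many inputs is one way to operationalise the "consistency of the classification process" mentioned in the abstract.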