Regularizers to the rescue: fighting overfitting in deep learning-based side-channel analysis
Abstract
Despite the considerable achievements of deep learning-based side-channel analysis, overfitting remains a significant obstacle to finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are a popular remedy for overfitting and have long been used in various domains. At the same time, work in the side-channel domain shows only sporadic use of regularization, and no systematic study has investigated the effectiveness of these techniques. In this paper, we investigate the effectiveness of regularization on randomly selected models by applying four powerful and easy-to-use regularization techniques to eight combinations of datasets, leakage models, and deep learning topologies. The investigated techniques are L1, L2, dropout, and early stopping. Our results show that while all of these techniques can improve performance in many cases, L1 and L2 are the most effective. Finally, if training time matters, early stopping is the best technique.
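To make the four techniques concrete, the sketch below shows one plausible way to combine them in a Keras profiling model. It is illustrative only, not the paper's setup: the framework choice, layer sizes, penalty strengths, dropout rate, patience value, and the trace length and class count are all assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers, callbacks

# Assumed profiling setup (hypothetical values): 700 leakage samples per
# trace, 256 classes as in an identity leakage model (9 for Hamming weight).
num_features, num_classes = 700, 256

# A small MLP combining three of the studied regularizers:
# L1 and L2 weight penalties plus dropout between the hidden layers.
model = tf.keras.Sequential([
    layers.Input(shape=(num_features,)),
    layers.Dense(200, activation="relu",
                 kernel_regularizer=regularizers.L1L2(l1=1e-5, l2=1e-4)),
    layers.Dropout(0.2),
    layers.Dense(200, activation="relu",
                 kernel_regularizer=regularizers.L1L2(l1=1e-5, l2=1e-4)),
    layers.Dropout(0.2),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping, the fourth technique: halt training once validation loss
# stops improving and restore the best weights seen so far.
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                     restore_best_weights=True)

# X_train / y_train would hold profiling traces and one-hot labels:
# model.fit(X_train, y_train, epochs=100, validation_split=0.1,
#           callbacks=[early_stop])
```

Note that early stopping also caps the number of training epochs actually run, which is why it is the cheapest of the four techniques when training time matters.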