Learning When to Stop

A Mutual Information Approach to Prevent Overfitting in Profiled Side-Channel Analysis

Abstract

Today, deep neural networks are a common choice for conducting profiled side-channel analysis. Unfortunately, it is not trivial to find neural network hyperparameters that result in top-performing attacks. One hyperparameter that governs the training process is the number of epochs for which the network is trained. If the training is too short, the network does not reach its full capacity, while if it is too long, the network overfits and cannot generalize to unseen examples. In this paper, we tackle the problem of determining the correct epoch at which to stop training in deep learning-based side-channel analysis. We demonstrate that the amount of information, or more precisely, the mutual information transferred to the output layer, can be measured and used as a reference metric to determine the epoch at which the network offers optimal generalization. To validate the proposed methodology, we provide extensive experimental results.
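The abstract only outlines the idea, but the core recipe it describes, namely tracking the mutual information between the output-layer activations and the labels over the training epochs and keeping the model from the epoch where this quantity peaks, can be illustrated with a minimal sketch. The histogram-based estimator, the function names, and the stopping rule below are assumptions made for illustration; they are not taken from the paper itself.

```python
import numpy as np

def mutual_information(activations, labels, n_bins=30):
    """Histogram-based estimate of I(T; Y) between output-layer
    activations T and class labels Y (a common binning estimator;
    the paper's exact estimator is not specified in the abstract)."""
    # Discretize all activation values into a common set of bins.
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    digitized = np.digitize(activations, edges)
    # Map each discretized activation vector to a single symbol.
    _, t_symbols = np.unique(digitized, axis=0, return_inverse=True)

    def entropy(symbols):
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # I(T; Y) = H(T) - H(T | Y)
    h_t = entropy(t_symbols)
    h_t_given_y = 0.0
    for y in np.unique(labels):
        mask = labels == y
        h_t_given_y += mask.mean() * entropy(t_symbols[mask])
    return h_t - h_t_given_y

# Hypothetical usage: after each epoch, compute the metric on a held-out
# validation set and append it to mi_per_epoch; the stopping point is the
# epoch with the highest mutual information.
#   best_epoch = int(np.argmax(mi_per_epoch))
```

In this reading, the mutual information serves as an early-stopping criterion in place of validation loss, which is known to correlate poorly with attack performance in side-channel settings.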

Files

Perin2021_Chapter_LearningWhen... (pdf, 6.4 MB)
