Effect of parameter tuning on reducing the number of queries required to perform model stealing

Abstract

Model extraction attacks generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a pre-existing dataset, but doing so requires a very high number of queries to be sent to the victim model, often in the realm of several million. The more difficult the dataset, the more queries are required to obtain an accurate substitute model. Across state-of-the-art model extraction algorithms, one aspect that is not thoroughly optimised is the hyperparameters of the models, and optimising them has been found to have a strong impact on the accuracy of the substitute model. To attempt to reduce the number of queries required, this research investigates the effects of optimising hyperparameters for both the MNIST and Fashion-MNIST datasets, using grid search and random search. The results show that proper hyperparameter tuning can reduce the number of queries required to perform model stealing if the hyperparameters are not already optimised. For example, some hyperparameter combinations require more than 125,000 queries to achieve 95% accuracy on the MNIST dataset, while others require only 15,000.
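
The sketch below illustrates the two search strategies mentioned above, grid search and random search, applied to substitute-model hyperparameters. It is a minimal illustration, not the paper's implementation: it uses scikit-learn's small digits dataset as a stand-in for MNIST, an MLPClassifier as the substitute model, and a made-up hyperparameter space (learning rate, batch size, hidden layer size); in an actual attack the training labels would come from querying the victim model rather than from the dataset itself.

```python
"""Hedged sketch: grid search vs. random search over substitute-model
hyperparameters. Assumed stand-ins: sklearn digits dataset (for MNIST),
MLPClassifier (for the substitute network), and an illustrative search space."""
from sklearn.datasets import load_digits
from sklearn.model_selection import ParameterGrid, ParameterSampler, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Illustrative hyperparameter space (assumption, not taken from the paper).
search_space = {
    "learning_rate_init": [0.0005, 0.001, 0.01],
    "batch_size": [32, 64, 128],
    "hidden_layer_sizes": [(64,), (128,), (128, 64)],
}

X, y = load_digits(return_X_y=True)
# In a real extraction attack, y_train would be labels obtained by querying
# the victim model on X_train, not ground-truth labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def evaluate(params):
    """Train a substitute model with the given hyperparameters and report accuracy."""
    model = MLPClassifier(max_iter=200, random_state=0, **params)
    model.fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

# Grid search: exhaustively evaluate every combination in the space.
grid_results = [(params, evaluate(params)) for params in ParameterGrid(search_space)]
best_grid = max(grid_results, key=lambda r: r[1])

# Random search: sample a fixed budget of combinations from the same space.
sampled = ParameterSampler(search_space, n_iter=8, random_state=0)
random_results = [(params, evaluate(params)) for params in sampled]
best_random = max(random_results, key=lambda r: r[1])

print("Best grid search config:  ", best_grid)
print("Best random search config:", best_random)
```

In the query-limited setting the abstract describes, the point of either search is that a well-tuned substitute architecture reaches a target accuracy with far fewer victim queries, so the cost of the search itself can be amortised against the saved query budget.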