Effect of Homomorphic Encryption on the Performance of Training Federated Learning Generative Adversarial Networks

Abstract

A Generative Adversarial Network (GAN) is a deep-learning generative model in the field of Machine Learning (ML) that involves training two Neural Networks (NNs) on a sizable data set. In certain fields, such as medicine, the training data may be hospital patient records stored across different hospitals. The classic centralized approach would send the data to a central server where the model is trained. However, doing so would breach the privacy and confidentiality of the patients and their data, and would be unacceptable. Therefore, Federated Learning (FL), an ML technique that trains models in a distributed setting without the data ever leaving the host device, is a better alternative to the centralized option. In this technique, only model parameters and certain metadata are communicated. Even so, there exist attacks that can infer user data from these parameters and metadata. A fully privacy-preserving solution involves homomorphically encrypting the communicated data. This paper focuses on the performance loss of training an FL-GAN with three types of Homomorphic Encryption (HE): Partial Homomorphic Encryption (PHE), Somewhat Homomorphic Encryption (SHE), and Fully Homomorphic Encryption (FHE). We also test the performance loss of Multi-Party Computation (MPC), as it has homomorphic properties. The performance is compared to that of training an FL-GAN without encryption. Our experiments show that the more complex the encryption scheme, the longer training takes, with the extra time for HE being quite significant compared to the base case of unencrypted FL.
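As a concrete illustration of the homomorphic property that PHE schemes provide, the following toy Python sketch implements a minimal Paillier cryptosystem. Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, which is the property an FL aggregator can exploit to sum encrypted model updates without ever decrypting them. This is an illustrative sketch only: the tiny key size is insecure, and the function names are our own, not the implementation evaluated in this paper.

```python
import math
import random

def L(x, n):
    # The Paillier "L function": L(x) = (x - 1) / n.
    return (x - 1) // n

def keygen(p=1789, q=1861):
    # Toy primes for illustration; real deployments use >= 2048-bit moduli.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                          # standard choice of generator
    mu = pow(L(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu, n)        # (public key, private key)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:         # r must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    return (L(pow(c, lam, n * n), n) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) decrypts to a + b.
pub, priv = keygen()
c_sum = (encrypt(pub, 3) * encrypt(pub, 4)) % (pub[0] ** 2)
assert decrypt(priv, c_sum) == 7
```

In an FL setting, each client would encrypt its parameter update under the shared public key, the server would multiply the ciphertexts to aggregate them, and only the key holder could decrypt the summed result.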