Improving privacy of Federated Learning Generative Adversarial Networks using Intel SGX

Abstract

Federated learning (FL), although a major privacy improvement over centralized learning, is still vulnerable to privacy leaks. The research presented in this paper provides an analysis of the threats to FL Generative Adversarial Networks (GANs). Furthermore, an implementation is provided that better protects the data of the participants with Trusted Execution Environments (TEEs), using Intel Software Guard Extensions (SGX). Lastly, the viability of its use in practice is evaluated and discussed. The results indicate that this approach protects the data without affecting the predictive capabilities of the model, with a noticeable but manageable impact on the training duration.