On the Privacy Bound of Distributed Optimization and its Application in Federated Learning

Abstract

Analyzing privacy leakage in distributed algorithms is challenging because it is difficult to track the information leakage across iterations. In this paper, we take a first step toward a theoretical analysis of the information flow in distributed optimization in which the gradients at every iteration remain concealed from other parties. Specifically, we derive a privacy bound on the minimum amount of information available to the adversary when the optimization accuracy is left uncompromised. Analyzing the derived bound, we show that the privacy leakage depends heavily on the optimization objective, in particular on the linearity of the system. To understand how the bound affects privacy in practice, we consider two canonical federated learning (FL) applications: linear regression and neural networks. We find that in the linear regression case, protecting the gradients alone is inadequate for protecting the private data, as the established bound implies that potentially all sensitive information is exposed. For more complex applications such as neural networks, concealing the gradients does provide certain privacy advantages, since it becomes harder for the adversary to infer the private inputs. Numerical validations are presented to consolidate our theoretical results.
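
The role of linearity can be illustrated with the well-known gradient-inversion observation for a squared-loss linear model: for a single sample, the weight gradient is the residual times the input and the bias gradient is the residual itself, so an observer of the raw gradients recovers the input exactly. The following is a minimal sketch of that standard observation, not the bound derived in the paper (which concerns leakage even when individual gradients are concealed); all variable names are hypothetical.

import numpy as np

# Hypothetical single-sample linear regression step: loss = 0.5 * (w @ x + b - y)**2
rng = np.random.default_rng(0)
x = rng.normal(size=5)        # private input held by one client
y = 1.3                       # private label
w = rng.normal(size=5)        # current global model weights
b = 0.1                       # current global model bias

residual = w @ x + b - y
grad_w = residual * x         # gradient w.r.t. the weights
grad_b = residual             # gradient w.r.t. the bias

# An adversary observing (grad_w, grad_b) recovers x and y exactly (when residual != 0):
x_reconstructed = grad_w / grad_b
y_reconstructed = w @ x_reconstructed + b - grad_b

print(np.allclose(x, x_reconstructed))   # True
print(np.isclose(y, y_reconstructed))    # True

For a nonlinear model such as a neural network, the analogous recovery generally requires solving a non-convex inversion problem, which is the intuition behind the privacy advantage discussed above.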

Files

On_the_Privacy_Bound_of_Distri... (pdf | 0.909 MB)
Unknown license

File under embargo until 23-04-2025