Exploring the Impact of Client Mobility on Decentralized Federated Learning Systems

Abstract

Federated Learning (FL) has gained prominence in recent years, in no small part due to its ability to train Machine Learning models on data from users' devices whilst keeping that data private. Decentralized Federated Learning (DFL) is a branch of FL in which clients communicate directly with each other instead of through a central server. Client mobility describes how users' devices move in the real world, and its effect on the learning performance of Hierarchical Federated Learning (HFL) systems has been found to be significant. However, the effects of client mobility on DFL systems have not been explored. In this work, we fill this research gap. First, we develop a model that describes client mobility in a DFL system. Then, using synthetic datasets, we show that client mobility has a positive impact on learning performance, and we quantify this impact. Moreover, we show that, under a baseline model aggregation algorithm, there is a disparity in learning performance between high-mobility and low-mobility clients. To address this disparity, we propose a new mobility-aware model aggregation algorithm. Our experimental results on synthetic datasets show that our solution reduces the disparity in learning performance between high- and low-mobility clients in the scenarios where this disparity is greatest, with no appreciable downside in global learning performance.