Federated Learning is a machine learning paradigm in which the computational load of training a server-side model is distributed across a pool of clients that exchange only model parameters with the server. Simulation environments try to accurately model the intricacies of such a system. However, current simulators do not properly enforce the concept of simulation time, leading to global model inaccuracies and difficulty in replicating simulation runs, which is most prominent in asynchronous scenarios. To this end, we propose a discrete-event simulator for the asynchronous central-server case that timestamps all events in the system prior to execution, reducing variability in client model updates on the server. We also introduce a log structure that keeps simulation states, making time-based inspection of clients possible. We evaluate the proposed discrete-event simulator against the baseline Flower simulator, reducing the standard deviation of server model updates by 31.5% and improving accuracy with heterogeneous clients on MNIST by 3.3% on average.
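To make the two mechanisms concrete, the sketch below illustrates the general idea of a discrete-event queue that timestamps client-update events before execution and appends each applied update to a time-ordered log for inspection. All class and method names are hypothetical, chosen for illustration; this is not the paper's implementation or Flower's API.

```python
import heapq
import itertools
from bisect import bisect_right
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    """A client-update event, timestamped before execution."""
    time: float
    seq: int                      # tie-breaker for identical timestamps
    client_id: int = field(compare=False)
    payload: dict = field(compare=False, default_factory=dict)

class DiscreteEventSimulator:
    """Processes pre-timestamped events in simulated-time order and
    records each applied update in a time-ordered log."""
    def __init__(self):
        self._heap: list[Event] = []
        self._seq = itertools.count()
        self.log: list[tuple[float, int, dict]] = []  # (time, client_id, payload)

    def schedule(self, time: float, client_id: int, payload: dict) -> None:
        """Timestamp an event up front; execution order is fixed by `time`."""
        heapq.heappush(self._heap, Event(time, next(self._seq), client_id, payload))

    def run(self) -> None:
        """Pop events in nondecreasing simulated time; a real simulator
        would apply the client's model update here."""
        while self._heap:
            ev = heapq.heappop(self._heap)
            self.log.append((ev.time, ev.client_id, ev.payload))

    def state_at(self, t: float) -> list[tuple[float, int, dict]]:
        """Time-based inspection: all updates applied up to simulated time t."""
        idx = bisect_right([entry[0] for entry in self.log], t)
        return self.log[:idx]

# Usage: identical timestamps yield identical processing order across reruns.
sim = DiscreteEventSimulator()
sim.schedule(2.5, client_id=1, payload={"delta": "w1"})
sim.schedule(1.0, client_id=2, payload={"delta": "w2"})
sim.run()
print(sim.state_at(2.0))  # -> [(1.0, 2, {'delta': 'w2'})]
```

Because every event carries its timestamp before execution, asynchronous client updates reach the server model in a deterministic order, which is one plausible way the reported reduction in update variability could arise.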