Using Personalized Federated Learning to train Diffusion Models

Abstract

Federated Learning (FL) is widely favoured for training machine learning models due to its privacy-preserving and data-diversity benefits. In this research paper, we investigate an extension of FL, referred to as Personalized Federated Learning (PFL), for training diffusion models. We explore the personalization technique of Transfer Learning (TL) and analyse evaluation metrics that capture personalization scores. Transfer Learning has been shown to produce good personalization results under both IID and non-IID data distributions. We explore the impact of specific hyperparameters and data distribution techniques and examine how the personalization results can be improved further. We demonstrate that the learning rate and the number of base layers of the convolutional neural network (CNN) yield per-user improvement results that follow a normal distribution. Increasing the number of users produces unstable convergence results, yet the per-user personalization score shows an overall improvement over the pre-trained model independent of the number of users. Our evaluations show that tuning the hyperparameters to their optimal values for specific non-IID data distributions produces better personalization scores than other PFL methods.
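The transfer-learning personalization step described above can be sketched as follows. This is a minimal, illustrative linear-model version, not the paper's CNN or diffusion model: the federated "base" layers are kept frozen and only the "head" is fine-tuned on one user's local data. All function and variable names here are hypothetical.

```python
import numpy as np

def personalize(global_base, global_head, X, y, lr=0.1, steps=50):
    """Transfer-learning personalization (illustrative sketch):
    freeze the globally trained base layer and fine-tune only the
    head on a single user's local data via gradient descent on MSE."""
    base = global_base            # frozen base-layer weights (not updated)
    head = global_head.copy()     # head weights to be personalized
    for _ in range(steps):
        feats = X @ base                          # frozen feature extractor
        preds = feats @ head                      # personalized prediction
        grad = feats.T @ (preds - y) / len(X)     # MSE gradient w.r.t. head
        head -= lr * grad
    return base, head

# Hypothetical single-user data: 3 input features -> 2 latent -> 1 output.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))
true_base = rng.normal(size=(3, 2))
true_head = rng.normal(size=(2, 1))
y = X @ true_base @ true_head

global_base = true_base              # pretend FL recovered the base layers
global_head = np.zeros((2, 1))       # head starts unpersonalized
_, head = personalize(global_base, global_head, X, y)
```

A per-user improvement score, as evaluated in the paper, can then be computed by comparing the local loss of the personalized head against the loss of the pre-trained (global) head on the same user's data.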