Tailoring Attacks To Federated Continual Learning Models


Abstract

Federated learning enables training machine learning models on decentralized data sources without centrally aggregating sensitive information. Continual learning, on the other hand, focuses on learning and adapting to new tasks over time while avoiding the catastrophic forgetting of knowledge from previously encountered tasks. Federated Continual Learning (FCL) addresses this challenge within the framework of federated learning. This thesis investigates how FCL can be made vulnerable to Byzantine attacks (from unpredictable or malicious nodes), which aim to manipulate or corrupt the training process and thereby compromise model performance. We adapt and evaluate four existing attacks from traditional federated learning in the FCL setting. Based on the insights gained, we propose three attacks tailored to FCL. Additionally, we introduce a novel attack called "Incremental Forgetting", which specifically targets the incremental knowledge retention aspect of FCL. Our experimental evaluation of these attacks against various FCL algorithms shows that tailoring them towards FCL yields varying degrees of performance benefits, while the novel attack additionally shows evidence of being more practical against real-world systems, strengthening its relevance to the FCL community. This research contributes to the development of secure and resilient FCL systems and to building better defenses against such attacks in the federated learning domain.