Federated learning (FL) enables the collaborative training of a model while keeping data decentralized. However, FL has been shown to be vulnerable to poisoning attacks. Model poisoning, in particular, allows adversaries to manipulate their local updates, significantly degrading the accuracy of the global model. Most state-of-the-art attacks, so-called AGR-tailored attacks, rely on prior knowledge of the server's aggregation rule (AGR), which makes them more effective than AGR-agnostic attacks that lack such information. In this paper, we propose AIDA (Adaptive Inference-Driven Attack), the first adaptive AGR-agnostic attack. AIDA begins in an AGR-agnostic mode, analyzing the training process to infer which aggregation rule the server is using, and then transitions to a more powerful AGR-tailored attack. Extensive experiments reveal that AIDA substantially outperforms other state-of-the-art AGR-agnostic attacks, achieving additional model degradation of up to 3.01% on Krum, 13.96% on MKrum, and 2.85% on FLAME. These results demonstrate that AIDA significantly degrades global model performance, making it a more generic and powerful attack than existing AGR-agnostic approaches.
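The abstract does not spell out AIDA's inference procedure, so the following is only a minimal sketch of the two-phase idea it describes: launch a generic attack while replaying candidate AGRs against the observed global updates, then switch to a rule-specific attack once one candidate matches. Every name here (infer_agr, malicious_update, the candidate set, the scaling constants, and the simplified Krum) is an illustrative assumption, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def krum(updates):
    """Simplified Krum: return the update with the smallest summed squared
    distance to all other updates (the full rule sums over only the
    n - f - 2 nearest neighbours)."""
    d = np.array([[np.sum((u - v) ** 2) for v in updates] for u in updates])
    return updates[int(np.argmin(d.sum(axis=1)))]

def mean(updates):
    """Plain FedAvg-style mean, as a non-robust baseline candidate."""
    return np.stack(updates).mean(axis=0)

# Hypothetical candidate set; a real attack would cover more robust AGRs.
CANDIDATE_AGRS = {"krum": krum, "mean": mean}

def infer_agr(visible_updates, observed_global):
    """Replay each candidate AGR on the updates the adversary controls or
    can estimate, and pick the rule whose output is closest to the global
    update actually broadcast by the server."""
    errors = {name: float(np.linalg.norm(rule(visible_updates) - observed_global))
              for name, rule in CANDIDATE_AGRS.items()}
    return min(errors, key=errors.get)

def malicious_update(benign_update, agr=None):
    """Phase 1 (agr is None): generic sign-flipping attack that needs no
    knowledge of the AGR. Phase 2: rescale per identified rule, e.g. a
    smaller norm to slip past Krum's distance-based selection. The scaling
    constants are placeholders, not tuned values from the paper."""
    scale = 5.0 if agr is None else {"krum": 1.2, "mean": 10.0}[agr]
    return -scale * benign_update

# Toy round: the adversary observes one aggregation, infers the AGR,
# then switches from the agnostic update to the tailored one.
updates = [rng.normal(size=4) for _ in range(5)]
observed = krum(updates)              # server secretly runs Krum
guess = infer_agr(updates, observed)  # adversary's inference
print(guess)                          # -> "krum"
tailored = malicious_update(updates[0], agr=guess)
```

In this toy setup the inference is exact because the adversary replays the candidate rules on the very same updates the server aggregated; in practice the adversary would only see its own updates and the broadcast global model, so the matching would be approximate.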