Unmasking the Power of Trigger Intensity in Federated Learning

Exploring Trigger Intensities in Backdoor Attacks


Abstract

Federated learning (FL) enables many participants to collaboratively train a deep learning model while keeping their individual training data private from one another. It is not, however, immune to security threats such as backdoor attacks, in which malicious adversaries manipulate the global model to trigger specific behaviors. In this paper, we investigate the impact of trigger intensity on backdoor attacks in the federated learning setting. We challenge the conventional practice of training and testing on the same trigger intensity and propose a novel approach: training on a weak trigger and testing on a stronger one. Our experiments show that this technique improves backdoor attack performance and robustness, and may open the possibility of backdoors that remain invisible during the testing phase. The ability to adjust trigger visibility lets attackers craft stealthier and more potent attacks, making trigger detection harder. Our findings complement existing state-of-the-art attacks, giving attackers greater latitude to tailor an attack to its intended target. We discuss the implications of our results and highlight the need for effective defense mechanisms against backdoor vulnerabilities in federated learning systems. Overall, our study advances the understanding of backdoor attacks in FL and identifies trigger intensity as a critical factor for attack customization and resilience. By exploring new avenues for stronger and stealthier attacks, we contribute to ongoing efforts to safeguard the privacy and reliability of AI systems in real-world applications.
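
To illustrate the train-weak / test-strong idea in concrete terms, the following is a minimal, hypothetical sketch (not the authors' implementation): a pixel-patch trigger is blended into client images with a mixing coefficient alpha that controls its intensity. The function name, the patch shape, and the specific alpha values are illustrative assumptions only.

```python
import torch

def apply_trigger(images, trigger, alpha):
    """Blend a fixed trigger pattern into a batch of images.

    alpha controls the trigger intensity: a small alpha yields a faint,
    hard-to-notice trigger, while alpha near 1 makes it fully visible.
    Both tensors are assumed to lie in [0, 1] with matching spatial size.
    """
    return (1 - alpha) * images + alpha * trigger

# Hypothetical usage: poison local training data with a weak trigger,
# then activate the backdoor at inference time with a stronger one.
TRAIN_ALPHA = 0.05   # weak, nearly invisible trigger used for poisoning
TEST_ALPHA = 0.50    # stronger trigger used to activate the backdoor

images = torch.rand(8, 3, 32, 32)      # stand-in for a client's batch
trigger = torch.zeros(3, 32, 32)
trigger[:, -4:, -4:] = 1.0             # simple white patch in one corner

poisoned_train_batch = apply_trigger(images, trigger, TRAIN_ALPHA)
backdoor_test_batch = apply_trigger(images, trigger, TEST_ALPHA)
```

Decoupling the two intensities in this way is what gives the attacker a stealth/strength trade-off: the poisoning phase can stay close to invisible while the activation phase uses whatever intensity reliably flips the global model's prediction.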
