With the rise of deep learning and the widespread deployment of deep neural networks, backdoor attacks have become a significant security threat and have drawn considerable research interest. One such attack is the SIG backdoor attack, which superimposes a crafted signal onto input images. We examine three variants of the SIG backdoor attack: ramp, triangle, and sinusoidal signals. Most work in AI security, however, has focused on deep classification tasks, leaving deep regression tasks largely unexplored. In this study, we adapt the SIG backdoor attack to a deep regression model (DRM) used for head pose estimation. Our objective is a backdoor trigger that remains imperceptible to the human eye while being reliably detectable by the DRM. To evaluate the effectiveness of the attack, we use two metrics: average angular error and accuracy in a discretized continuous space. Additionally, we adapt fine-tuning as a countermeasure against the backdoor attack; by applying this strategy, we aim to reduce the risk of backdoor attacks and improve the robustness of deep regression models for head pose estimation.
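As a concrete illustration of how such signal triggers are superimposed, the sketch below generates the three variants, assuming the standard SIG formulation (for the sinusoidal case, v(i, j) = Δ · sin(2πjf / m), applied identically to every row). The function names and the amplitude/frequency defaults are illustrative assumptions, not the exact parameters used in this study.

```python
import numpy as np

def sig_trigger(shape, kind="sinusoidal", delta=20.0, freq=6):
    """Generate a SIG-style backdoor signal of the given (H, W) shape.

    Hypothetical helper: the sinusoidal form follows the standard SIG
    definition v(i, j) = delta * sin(2*pi*j*freq / W); the ramp and
    triangle variants are analogous horizontal signals.
    """
    h, w = shape
    j = np.arange(w, dtype=np.float32)
    if kind == "sinusoidal":
        row = delta * np.sin(2 * np.pi * j * freq / w)
    elif kind == "ramp":
        row = delta * j / w  # linear rise from 0 to delta across the image
    elif kind == "triangle":
        period = w / freq    # periodic triangle wave with amplitude delta
        row = delta * 2 * np.abs(j / period - np.floor(j / period + 0.5))
    else:
        raise ValueError(f"unknown signal kind: {kind}")
    return np.tile(row, (h, 1))  # repeat the same 1-D signal on every row

def poison(image, kind="sinusoidal", delta=20.0, freq=6):
    """Superimpose the trigger on a grayscale uint8 image and clip to [0, 255]."""
    v = sig_trigger(image.shape[:2], kind, delta, freq)
    return np.clip(image.astype(np.float32) + v, 0, 255).astype(np.uint8)
```

With a small amplitude delta relative to the 0-255 pixel range, the superimposed signal is hard to perceive visually, which is what lets the trigger stay stealthy while still shifting the DRM's pose predictions.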