PAID

Perturbed Image Attacks Analysis and Intrusion Detection Mechanism for Autonomous Driving Systems


Abstract

Modern Autonomous Vehicles (AVs) leverage road context information collected through sensors (e.g., LiDAR, radar, and cameras) to support the automated driving experience. Once such information is collected, a neural network model predicts the subsequent actions that the AV executes. However, state-of-the-art research has shown that an attacker can compromise the accuracy of the neural network model's predictions. Indeed, mispredicting the subsequent actions can have harmful consequences for road users' safety. In this paper, we analyze the disruptive impact of adversarial attacks on the road context-aware Intrusion Detection System (RAIDS) and propose a solution to mitigate such effects. To this end, we implement five state-of-the-art evasion attacks on the vehicle camera images that the IDS uses to monitor internal vehicular traffic. Our experimental results show that this type of attack can reduce the attack detection accuracy of such detectors to as low as 2.83%. To counter such adversarial attacks, we investigate different countermeasures and propose PAID, a robust context-aware IDS that leverages feature squeezing and GPS data to detect intrusions. We evaluate PAID's capability to identify such attacks, and our implementation results confirm that PAID achieves a detection accuracy of up to 93.9%, outperforming RAIDS.
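As background for readers unfamiliar with the defense the abstract mentions, the following is a minimal sketch of feature squeezing as an adversarial-input detector. The idea (from Xu et al.'s feature-squeezing work) is that coarsening the input, for example by reducing its bit depth, barely changes a model's prediction on natural images but can sharply change it on adversarially perturbed ones. The `predict` function and the `threshold` value here are illustrative placeholders, not part of PAID's actual implementation.

```python
import numpy as np

def squeeze_bit_depth(img, bits=4):
    """Quantize pixel intensities (in [0, 1]) to 2**bits levels,
    a standard feature-squeezing transform."""
    levels = 2 ** bits - 1
    return np.round(img * levels) / levels

def looks_adversarial(img, predict, threshold=0.5, bits=4):
    """Flag an input whose prediction shifts too much after squeezing.

    `predict` maps an image array to a probability vector; `threshold`
    would be tuned on clean data. Both are hypothetical stand-ins for
    whatever model and calibration a concrete IDS would use.
    """
    p_original = predict(img)
    p_squeezed = predict(squeeze_bit_depth(img, bits))
    # Large L1 distance between the two predictions suggests the input
    # was crafted to exploit fine-grained pixel perturbations.
    return float(np.abs(p_original - p_squeezed).sum()) > threshold
```

On a clean image, a smooth classifier's output changes only marginally under 4-bit quantization, so the L1 gap stays under the threshold; an evasion attack that relies on small pixel perturbations tends to lose its effect after squeezing, producing a large gap.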