Paper ID | MLR-APPL-IVSMR-2.12
Paper Title | SENSOR ADVERSARIAL TRAITS: ANALYZING ROBUSTNESS OF 3D OBJECT DETECTION SENSOR FUSION MODELS
Authors | Won Park, Nan Liu, University of Michigan, United States; Qi Alfred Chen, University of California, Irvine, United States; Z. Morley Mao, University of Michigan, United States
Session | MLR-APPL-IVSMR-2: Machine learning for image and video sensing, modeling and representation 2
Location | Area D
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image & video sensing, modeling, and representation
IEEE Xplore Open Preview | Available in IEEE Xplore
Abstract | A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly performed with sensor fusion models: multimodal 3D object detection models that use both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study to analyze the robustness of a high-performance, open-source sensor fusion model architecture against adversarial attacks, and we challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that, despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks, including disappearance, universal patch, and spoofing attacks. After identifying the reasons behind this vulnerability, we explore potential defenses and provide recommendations for improved sensor fusion models.