Paper ID: SS-MMSDF-1.5
Paper Title: TOWARDS UNIVERSAL PHYSICAL ATTACKS ON CASCADED CAMERA-LIDAR 3D OBJECT DETECTION MODELS
Authors: Mazen Abdelfattah, Kaiwen Yuan, Z. Jane Wang, Rabab Ward, University of British Columbia, Canada
Session: SS-MMSDF-1: Special Session: AI for Multimedia Security and Deepfake 1
Location: Area B
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Special Sessions: Artificial Intelligence for Multimedia Security and Deepfake
Abstract: We propose a universal, physically realizable adversarial attack on a cascaded multi-modal deep neural network (DNN) in the context of self-driving cars. DNNs achieve high performance in 3D object detection, but they are known to be vulnerable to adversarial attacks. Such attacks have been heavily investigated in the RGB image domain and, more recently, in the point cloud domain, but rarely in both domains simultaneously; this paper fills that gap. We use a single 3D mesh and differentiable rendering to explore how perturbing the mesh's geometry and texture can fool the detector. We attack a prominent cascaded multi-modal DNN, the Frustum PointNet model. On the popular KITTI benchmark, the proposed universal multi-modal attack reduces the model's ability to detect a car by nearly 73%. This work can aid in understanding what the cascaded RGB-point cloud DNN learns and how vulnerable it is to adversarial attacks.
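The abstract describes a gradient-based pipeline: render one adversarial mesh into both modalities and optimize its geometry and texture to suppress the detector's confidence. Below is a minimal sketch of that optimization loop in PyTorch, under stated assumptions: `differentiable_render` and `detection_score` are hypothetical stand-ins (the paper's actual renderer and Frustum PointNet pipeline are not reproduced here), implemented as dummy stubs so the sketch runs end to end, and the tensor shapes and perturbation budgets `eps_v`, `eps_t` are illustrative.

```python
import torch

torch.manual_seed(0)

# Universal adversarial object: one mesh (vertices + per-vertex texture)
# shared across all scenes. Shapes are illustrative, not from the paper.
base_vertices = torch.rand(642, 3)   # vertex positions
texture = torch.rand(642, 3)         # per-vertex RGB in [0, 1]

# Learnable perturbations of geometry and texture.
delta_v = torch.zeros_like(base_vertices, requires_grad=True)
delta_t = torch.zeros_like(texture, requires_grad=True)

def differentiable_render(verts, tex):
    """Hypothetical stand-in for a differentiable renderer that produces
    the two modalities a cascaded camera-LiDAR detector consumes: an RGB
    view of the mesh and LiDAR-like points sampled from its surface.
    The dummy outputs merely keep gradients flowing to both inputs."""
    rgb = tex.mean(dim=0) + 1e-3 * verts.abs().mean()
    points = verts + 1e-3 * tex.mean()
    return rgb, points

def detection_score(rgb, points):
    """Hypothetical stand-in for the detector's confidence that the
    target car is present; the attack minimizes this score."""
    return rgb.sum() + points.norm()

opt = torch.optim.Adam([delta_v, delta_t], lr=1e-2)
eps_v, eps_t = 0.05, 0.1   # assumed budgets for physical realizability

for step in range(200):
    verts = base_vertices + delta_v
    tex = (texture + delta_t).clamp(0, 1)
    rgb, points = differentiable_render(verts, tex)
    loss = detection_score(rgb, points)   # push detection confidence down
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Project the perturbations back into the allowed budget so the mesh
    # stays small and smooth enough to fabricate.
    with torch.no_grad():
        delta_v.clamp_(-eps_v, eps_v)
        delta_t.clamp_(-eps_t, eps_t)
```

In the paper's setting, the renderer would composite the mesh into the camera image and add its surface points to the LiDAR point cloud of each training scene; reusing one perturbed mesh across all scenes is what makes the attack universal rather than per-scene.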