Paper ID | ARS-7.5
Paper Title | AI-GAN: Attack-Inspired Generation of Adversarial Examples
Authors | Tao Bai, Jun Zhao, Jinlin Zhu, Nanyang Technological University, Singapore; Shoudong Han, Huazhong University of Science and Technology, China; Jiefeng Chen, University of Wisconsin-Madison, United States; Bo Li, University of Illinois at Urbana-Champaign, United States; Alex Kot, Nanyang Technological University, Singapore
Session | ARS-7: Image and Video Interpretation and Understanding 2
Location | Area H
Session Time | Wednesday, 22 September, 08:00 - 09:30
Presentation Time | Wednesday, 22 September, 08:00 - 09:30
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
Abstract | Deep neural networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding imperceptible perturbations to inputs. Various attacks and strategies have recently been proposed, but how to generate adversarial examples that are perceptually realistic, and to do so efficiently, remains an open problem. This paper proposes a novel framework called Attack-Inspired GAN (AI-GAN), in which a generator, a discriminator, and an attacker are trained jointly. Once trained, the generator can produce adversarial perturbations efficiently given input images and target classes. Through extensive experiments on several popular datasets, e.g., MNIST and CIFAR-10, AI-GAN achieves high attack success rates and significantly reduces generation time in various settings. Moreover, for the first time, AI-GAN successfully scales to a more complicated dataset, CIFAR-100, with around $90\%$ success rates across all classes.
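The sketch below illustrates the core idea stated in the abstract: a generator, conditioned on an input image and a desired target class, outputs a bounded perturbation that pushes a victim classifier toward the target class. The architecture, embedding size, perturbation bound `epsilon`, and loss are illustrative assumptions for exposition only, not the authors' exact AI-GAN configuration (which also trains a discriminator and an attacker jointly).

```python
# Minimal sketch of a class-conditional perturbation generator (assumed design,
# not the paper's exact architecture). Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    def __init__(self, num_classes=10, channels=1, epsilon=0.3):
        super().__init__()
        self.epsilon = epsilon  # L-infinity bound on the perturbation (assumed value)
        self.embed = nn.Embedding(num_classes, 16)
        self.net = nn.Sequential(
            nn.Conv2d(channels + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, target):
        # Broadcast the target-class embedding to a spatial map and concatenate
        # it with the image so the perturbation is conditioned on both.
        b, _, h, w = x.shape
        cond = self.embed(target).view(b, -1, 1, 1).expand(b, 16, h, w)
        # Tanh output scaled into an epsilon-ball around zero.
        return self.epsilon * self.net(torch.cat([x, cond], dim=1))


def targeted_attack_loss(victim, x, delta, target):
    # Encourage the victim classifier to predict the chosen target class
    # on the clipped adversarial input; a full AI-GAN objective would also
    # include GAN terms from the discriminator for perceptual realism.
    x_adv = torch.clamp(x + delta, 0.0, 1.0)
    return F.cross_entropy(victim(x_adv), target)
```

Once such a generator is trained, producing an adversarial example is a single forward pass, which is the source of the generation-time savings claimed in the abstract compared with iterative optimization-based attacks.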