Paper ID | SS-MMSDF-1.7
Paper Title | FROM GRADIENT LEAKAGE TO ADVERSARIAL ATTACKS IN FEDERATED LEARNING
Authors | Jia Qi Lim, Chee Seng Chan, Universiti Malaya, Malaysia
Session | SS-MMSDF-1: Special Session: AI for Multimedia Security and Deepfake 1
Location | Area B
Session Time | Monday, 20 September, 15:30 - 17:00
Presentation Time | Monday, 20 September, 15:30 - 17:00
Presentation | Poster
Topic | Special Sessions: Artificial Intelligence for Multimedia Security and Deepfake
Abstract | Deep neural networks (DNNs) are widely used in real-life applications despite limited understanding of the technology and its challenges. Data privacy remains one of the bottlenecks yet to be overcome, and further challenges arise as researchers pay closer attention to DNN vulnerabilities. In this work, we cast doubt on the reliability of DNNs with concrete evidence, particularly in the federated learning environment, by utilizing an existing privacy-breaking algorithm that inverts model gradients to reconstruct the input data. By performing this attack, we demonstrate that the data reconstructed by the gradient-inversion algorithm constitutes a potential threat, and we further reveal the vulnerabilities of models in representation learning. A PyTorch implementation is provided at https://github.com/Jiaqi0602/adversarial-attack-from-leakage/
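
The gradient-inversion attack described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch example in the style of deep-leakage-from-gradients attacks, not the authors' exact implementation from the linked repository; the names `model`, `target_grads`, `label`, `input_shape`, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a gradient-inversion attack (DLG-style), assuming the
# attacker has the model, the leaked per-parameter gradients `target_grads`,
# and the (known or separately recovered) label of the victim's input.
import torch
import torch.nn.functional as F

def invert_gradients(model, target_grads, label, input_shape, steps=300):
    """Optimize a dummy input so its gradients match the leaked gradients."""
    dummy = torch.randn(1, *input_shape, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy])

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        # Gradients of the loss w.r.t. model parameters, kept differentiable
        # so we can backpropagate through them into the dummy input.
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Minimize the squared distance to the victim's leaked gradients.
        grad_loss = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
        grad_loss.backward()
        return grad_loss

    for _ in range(steps):
        optimizer.step(closure)
    return dummy.detach()  # the reconstructed input
```

For example, with a CIFAR-10 classifier one might call `invert_gradients(model, target_grads, torch.tensor([3]), (3, 32, 32))`; the fidelity of the reconstruction reflects how much input information the shared gradients leak, which is the privacy threat the paper exploits.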