Paper ID | SS-MMSDF-2.9
Paper Title | RECALIBRATED BANDPASS FILTERING ON TEMPORAL WAVEFORM FOR AUDIO SPOOF DETECTION
Authors | Yanzhen Ren, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, China; Wuyang Liu, Dengkai Liu, School of Cyber Science and Engineering, Wuhan University, China; Lina Wang, Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, China
Session | SS-MMSDF-2: Special Session: AI for Multimedia Security and Deepfake 2
Location | Area A
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Special Sessions: Artificial Intelligence for Multimedia Security and Deepfake
Abstract | Deepfake techniques mislead people's cognition with high-quality fake videos and audio, and speech synthesis, which mainly includes Text-To-Speech (TTS) and Voice Conversion (VC), is an important tool for implementing such cognitive attacks. In this paper, we propose a method for audio spoof detection based on frequency band recalibration via sinc convolution and a squeeze-and-excitation module, extracting features directly from the temporal waveform and emphasizing the frequency bands that are most useful for this task. Experimental results show that the proposed method outperforms other similar methods by 18.6%, with an average EER of 7.23%, and achieves better generalizability in detecting unseen spoofing methods, while the size of the model is reduced by 30.8%.
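The abstract describes a front end that filters the raw waveform with learnable sinc band-pass filters and then recalibrates the resulting frequency bands with a squeeze-and-excitation block. The PyTorch sketch below illustrates that general idea only; the `SincConv1d` and `SEBlock1d` classes, channel counts, kernel length, and initialisation are illustrative assumptions and not the authors' implementation or configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SincConv1d(nn.Module):
    """Band-pass filters with learnable low/high cutoffs (SincNet-style sketch)."""

    def __init__(self, out_channels=64, kernel_size=129, sample_rate=16000):
        super().__init__()
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.sample_rate = sample_rate
        # Learnable cutoffs: low edge and bandwidth per filter (illustrative init).
        low = torch.linspace(30, sample_rate / 2 - 100, out_channels)
        band = torch.full((out_channels,), 100.0)
        self.low_hz = nn.Parameter(low.unsqueeze(1))
        self.band_hz = nn.Parameter(band.unsqueeze(1))
        # Fixed pieces of the filter: Hamming window and time axis in seconds.
        n = (kernel_size - 1) / 2
        self.register_buffer("window", torch.hamming_window(kernel_size))
        self.register_buffer("t", torch.arange(-n, n + 1).view(1, -1) / sample_rate)

    def forward(self, x):                       # x: (batch, 1, samples)
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz), max=self.sample_rate / 2)

        # Ideal band-pass = difference of two sinc low-pass filters, then windowed.
        def sinc_lp(fc):
            return 2 * fc * torch.sinc(2 * fc * self.t)

        filters = (sinc_lp(high) - sinc_lp(low)) * self.window
        filters = filters / (2 * (high - low))  # simple amplitude normalisation
        filters = filters.view(self.out_channels, 1, self.kernel_size)
        return F.conv1d(x, filters, padding=self.kernel_size // 2)


class SEBlock1d(nn.Module):
    """Squeeze-and-excitation: re-weights (recalibrates) each band/channel."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, time)
        scale = self.fc(x.mean(dim=-1))         # squeeze over time, excite per channel
        return x * scale.unsqueeze(-1)


if __name__ == "__main__":
    # Toy usage: one second of 16 kHz audio through sinc filtering + SE recalibration.
    wav = torch.randn(2, 1, 16000)
    feats = SincConv1d()(wav)                   # (2, 64, 16000) band-filtered features
    recal = SEBlock1d(64)(feats)                # channel-wise recalibrated features
    print(recal.shape)
```

In this reading, the sinc layer constrains the first convolution to interpretable band-pass responses, and the SE weights indicate which bands the network considers most discriminative for spoof detection; how the paper stacks and trains these components is not reproduced here.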