Paper ID | IMA-ELI-1.1
Paper Title | Identity-Free Facial Expression Recognition using conditional Generative Adversarial Network
Authors | Jie Cai, Zibo Meng, InnoPeak Technology, United States; Ahmed Shehab Khan, James O’Reilly, Zhiyuan Li, University of South Carolina, United States; Shizhong Han, Qualcomm AI Research, United States; Yan Tong, University of South Carolina, United States
Session | IMA-ELI-1: Imaging and Media Applications + Electronic Imaging
Location | Area F
Session Time | Monday, 20 September, 15:30 - 17:00
Presentation Time | Monday, 20 September, 15:30 - 17:00
Presentation | Poster
Topic | Imaging and Media Applications: Image and video processing over networks
Abstract | A novel Identity-Free conditional Generative Adversarial Network (IF-GAN) was proposed for Facial Expression Recognition (FER) to explicitly reduce the high inter-subject variations caused by identity-related facial attributes, e.g., age, race, and gender. As part of an end-to-end system, a cGAN was designed to transform a given input facial expression image into an “average” identity face showing the same expression as the input. Identity-free FER is then possible, since the generated images share the same synthetic “average” identity and differ only in the expressions they display. Experiments on four facial expression datasets, one of which contains spontaneous expressions, show that IF-GAN outperforms the baseline CNN and achieves state-of-the-art performance for FER.
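The pipeline described in the abstract can be summarized with a minimal sketch: a conditional generator re-renders the input face onto a synthetic “average” identity while keeping the expression, a discriminator judges the generated faces, and an expression classifier is trained end-to-end on the identity-normalized output. The architectures, loss terms, and names below (Generator, Discriminator, ExpressionClassifier, NUM_EXPRESSIONS, the dummy avg_face targets) are illustrative assumptions for a PyTorch-style implementation, not the authors' IF-GAN code.

```python
# Minimal sketch of the IF-GAN idea from the abstract (assumed architectures,
# losses, and hyper-parameters; not the authors' implementation).
import torch
import torch.nn as nn

NUM_EXPRESSIONS = 7  # e.g., six basic expressions plus neutral (assumption)


class Generator(nn.Module):
    """Encoder-decoder that re-renders the input expression on an 'average' identity."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class Discriminator(nn.Module):
    """Real/fake critic on the generated 'average'-identity faces."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realism score per image


class ExpressionClassifier(nn.Module):
    """CNN that recognizes the expression from the identity-free image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.fc = nn.Linear(32, NUM_EXPRESSIONS)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


# One illustrative training step with a dummy batch (loss weights omitted).
G, D, C = Generator(), Discriminator(), ExpressionClassifier()
opt_g = torch.optim.Adam(list(G.parameters()) + list(C.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, ce, l1 = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.L1Loss()

x = torch.randn(4, 3, 64, 64)                # input faces (random placeholders)
avg_face = torch.randn(4, 3, 64, 64)         # "average"-identity targets with the same expressions
y = torch.randint(0, NUM_EXPRESSIONS, (4,))  # expression labels

# Discriminator update: real "average"-identity faces vs. generated ones.
fake = G(x).detach()
loss_d = bce(D(avg_face), torch.ones(4)) + bce(D(fake), torch.zeros(4))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator + classifier update: fool D, match the target face, classify the expression.
fake = G(x)
loss_g = bce(D(fake), torch.ones(4)) + l1(fake, avg_face) + ce(C(fake), y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The design choice mirrored here is that the classifier only ever sees generated faces sharing one synthetic identity, so the displayed expression is the only remaining source of variation between samples.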