Paper ID | MLR-APPL-IVASR-6.8
Paper Title | HSEGAN: HAIR SYNTHESIS AND EDITING USING STRUCTURE-ADAPTIVE NORMALIZATION ON GENERATIVE ADVERSARIAL NETWORK
Authors | Wanling Fan, Jiayuan Fan, Fudan University, China; Gang Yu, Bin Fu, Tencent, China; Tao Chen, Fudan University, China
Session | MLR-APPL-IVASR-6: Machine learning for image and video analysis, synthesis, and retrieval 6
Location | Area D
Session Time | Wednesday, 22 September, 08:00 - 09:30
Presentation Time | Wednesday, 22 September, 08:00 - 09:30
Presentation | Poster
Topic | Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
IEEE Xplore Open Preview | Available in IEEE Xplore
Abstract | Human hair is a special material with complex and varied high-frequency details, which makes synthesizing and editing realistic, fine-grained hair a challenging task for deep learning methods. In this paper, we propose HSEGAN, a novel framework consisting of two condition modules that encode the foreground hair and the background respectively, followed by a hair synthesis generator that produces the final result from the encoded input. For efficient and effective hair generation, we propose hair structure-adaptive normalization (HSAN) and build the hair synthesis generator from several HSAN residual blocks. HSEGAN allows explicit manipulation of hair at three levels: color, structure, and shape. Extensive experiments on the FFHQ dataset demonstrate that our method generates higher-quality hair images than state-of-the-art methods while requiring less inference time.
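The abstract does not give HSAN's exact formulation, but "structure-adaptive normalization" suggests a SPADE-style spatially-adaptive scheme: activations are instance-normalized, then modulated by per-pixel scale and shift maps predicted from a hair structure map. The sketch below is a minimal, hypothetical numpy illustration of that idea; the function name, the use of 1x1 linear maps in place of learned convolutions, and all tensor shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def structure_adaptive_norm(x, structure_map, w_gamma, w_beta, eps=1e-5):
    """SPADE-style normalization sketch (hypothetical, not HSEGAN's exact HSAN).

    x:             feature map of shape (C, H, W)
    structure_map: hair structure guidance of shape (S, H, W)
    w_gamma/w_beta: (C, S) weights standing in for learned 1x1 convolutions
    """
    # Instance-normalize each channel of x over its spatial dimensions.
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / np.sqrt(var + eps)

    # A 1x1 convolution is a per-pixel linear map over the structure channels,
    # giving spatially-varying modulation parameters gamma(h, w) and beta(h, w).
    gamma = np.einsum('cs,shw->chw', w_gamma, structure_map)
    beta = np.einsum('cs,shw->chw', w_beta, structure_map)

    # Denormalize: modulate the normalized features with the structure-derived
    # scale and shift, so hair structure steers the generated texture locally.
    return (1.0 + gamma) * x_norm + beta
```

In a SPADE-like generator, a stack of residual blocks would each apply such a normalization before their convolutions, which is consistent with the abstract's "several HSAN residual blocks".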