Paper ID | SMR-3.6
Paper Title | A ONE-SHOT TEXTURE-PERCEIVING GENERATIVE ADVERSARIAL NETWORK FOR UNSUPERVISED SURFACE INSPECTION
Authors | Lingyun Gu, Tsinghua University, China; Lin Zhang, University of Cincinnati, United States; Zhaokui Wang, Tsinghua University, China
Session | SMR-3: Image and Video Representation
Location | Area F
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Image and Video Sensing, Modeling, and Representation: Image & video representation
Abstract | Visual surface inspection is a challenging task owing to the highly diverse appearance of target surfaces and defective regions. Previous approaches rely heavily on large quantities of training examples with manual annotations; in many practical settings, however, it is difficult to collect enough samples for inspection. To address this, we propose a hierarchical texture-perceiving generative adversarial network (HTP-GAN) that is learned from a single normal image in an unsupervised scheme. Specifically, HTP-GAN contains a pyramid of convolutional GANs that capture the global structure and the fine-grained representation of an image simultaneously, which helps distinguish defective surface regions from normal ones. In addition, a texture-perceiving module in the discriminator captures a spatially invariant representation of the normal image via directional convolutions, making the discriminator more sensitive to defective areas. Experiments on a variety of datasets consistently demonstrate the effectiveness of our method.
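To make the discriminator-side idea concrete, below is a minimal, hypothetical sketch of a texture-perceiving block built from directional convolutions, in the spirit the abstract describes. The exact kernel shapes, channel widths, aggregation, and class name (DirectionalTexturePerceiver) used in HTP-GAN are not given in this listing; everything here is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch only: directional (strip-shaped) convolutions whose
# responses are fused into a single feature map, intended to model repeating
# surface texture in a position-insensitive way. Kernel sizes and channel
# widths are assumptions, not values from the HTP-GAN paper.
import torch
import torch.nn as nn


class DirectionalTexturePerceiver(nn.Module):
    def __init__(self, in_ch: int = 64, out_ch: int = 64):
        super().__init__()
        # Assumed directional convolutions: horizontal and vertical strip
        # kernels plus a standard square kernel; padding keeps spatial size.
        self.horizontal = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 5), padding=(0, 2))
        self.vertical = nn.Conv2d(in_ch, out_ch, kernel_size=(5, 1), padding=(2, 0))
        self.square = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate responses from different orientations, then fuse them,
        # so the feature reflects texture statistics rather than exact layout.
        feats = torch.cat(
            [self.horizontal(x), self.vertical(x), self.square(x)], dim=1
        )
        return self.act(self.fuse(feats))


if __name__ == "__main__":
    block = DirectionalTexturePerceiver()
    patch = torch.randn(1, 64, 32, 32)  # a feature map from a discriminator stage
    print(block(patch).shape)           # torch.Size([1, 64, 32, 32])
```

Such a block would typically sit inside one discriminator of the multi-scale pyramid; a defective region that breaks the learned texture statistics then produces an anomalous response at the corresponding scale.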