Paper ID | ARS-3.1
Paper Title | SELF-SUPERVISED BODYMAP-TO-APPEARANCE CO-ATTENTION FOR PARTIAL PERSON RE-IDENTIFICATION
Authors | Ci-Siang Lin, Yu-Chiang Frank Wang, National Taiwan University, Taiwan
Session | ARS-3: Image and Video Biometric Analysis
Location | Area H
Session Time | Monday, 20 September, 13:30 - 15:00
Presentation Time | Monday, 20 September, 13:30 - 15:00
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
Abstract | Person re-identification (re-ID) aims at recognizing the same person across distinct camera views. Although a number of methods have been proposed to tackle re-ID, most existing approaches assume that full-body detection is available. In practice, pedestrian detection may be imperfect due to partial occlusion or background clutter, resulting in the challenging task of partial re-ID. To tackle this problem, we propose a novel deep learning framework which jointly performs image rescaling and bodymap-to-appearance co-attention, followed by image matching for re-ID. The image rescaler in our framework learns to produce distortion-free images in a self-supervised manner, which allows the subsequent image matching to exploit associated image regions via body-part attention. Our quantitative and qualitative results on two benchmark partial re-ID datasets confirm the effectiveness of our approach and its superiority over state-of-the-art partial re-ID methods.
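The abstract describes a three-stage pipeline: a self-supervised image rescaler, a bodymap-to-appearance co-attention module, and a final matching step. The snippet below is a minimal PyTorch sketch of that data flow only; every module name, layer size, the scale-prediction head, the soft part-assignment encoder, and the cosine-similarity matching are illustrative assumptions and are not specified by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialReIDSketch(nn.Module):
    """Hypothetical sketch of the pipeline outlined in the abstract:
    (1) self-supervised image rescaler, (2) bodymap-to-appearance co-attention,
    (3) feature matching. All architectural details here are assumptions."""

    def __init__(self, feat_dim=256):
        super().__init__()
        # (1) Rescaler head: predicts a single scale factor to undo crop distortion.
        self.rescaler = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )
        # (2) Appearance encoder and bodymap (soft part-assignment) encoder.
        self.appearance = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.bodymap = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.Softmax(dim=1),
        )

    def forward(self, img):
        # (1) Rescale the partial crop toward a less distorted aspect ratio.
        scale = 0.5 + self.rescaler(img)               # scale factor in (0.5, 1.5)
        h, w = img.shape[-2:]
        new_h = int(h * scale.mean().item())
        img = F.interpolate(img, size=(max(new_h, 8), w),
                            mode="bilinear", align_corners=False)

        # (2) Co-attention: body-part maps weight the appearance feature maps.
        app = self.appearance(img)                     # B x C x H' x W' appearance features
        parts = self.bodymap(img)                      # B x C x H' x W' soft part attention
        feat = (app * parts).flatten(2).sum(-1)        # attention-pooled descriptor, B x C

        # (3) Matching uses cosine similarity between normalized descriptors.
        return F.normalize(feat, dim=1)


if __name__ == "__main__":
    model = PartialReIDSketch()
    query = torch.randn(1, 3, 128, 64)                 # partial-body crop
    gallery = torch.randn(1, 3, 256, 128)              # full-body crop
    sim = (model(query) * model(gallery)).sum(dim=1)   # cosine similarity for ranking
    print(sim.item())
```

In this sketch, pooling appearance features with the bodymap attention yields a fixed-length descriptor regardless of crop size, which is what allows a partial query and a full-body gallery image to be compared directly; how the actual paper implements rescaling, attention, and matching may differ.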