Paper ID | ARS-9.9
Paper Title | ATTEND, CORRECT AND FOCUS: A BIDIRECTIONAL CORRECT ATTENTION NETWORK FOR IMAGE-TEXT MATCHING
Authors | Yang Liu, Huaqiu Wang, Chongqing University of Technology, China; Fanyang Meng, Peng Cheng Laboratory, China; Mengyuan Liu, Sun Yat-sen University, China; Hong Liu, Peking University, China
Session | ARS-9: Interpretation, Understanding, Retrieval
Location | Area I
Session Time | Tuesday, 21 September, 13:30 - 15:00
Presentation Time | Tuesday, 21 September, 13:30 - 15:00
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Storage and Retrieval
Abstract | The image-text matching task aims to learn fine-grained correspondences between images and sentences. Existing methods use attention mechanisms to learn these correspondences by attending to all fragments without considering the relationship between fragments and global semantics, which inevitably leads to semantic misalignment among irrelevant fragments. To this end, we propose a Bidirectional Correct Attention Network (BCAN), which leverages both global and local similarities to reassign attention weights and thereby avoid such semantic misalignment. Specifically, we introduce a global correct unit to correct attention focused on relevant fragments within irrelevant semantics, and a local correct unit to correct attention focused on irrelevant fragments within relevant semantics. Experiments on the Flickr30K and MSCOCO datasets verify the effectiveness of the proposed BCAN, which outperforms both previous attention-based methods and state-of-the-art methods. Code is available at: https://github.com/liuyyy111/BCAN.
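The abstract describes reweighting cross-modal attention using global similarity so that fragments irrelevant to the overall semantics receive less attention. The following is a minimal NumPy sketch of that general idea, not the paper's actual method: the function name `corrected_attention`, the temperature `lam`, and the specific correction rule (damping attention by how far a word-region similarity exceeds the global mean similarity) are all illustrative assumptions; consult the linked repository for the real BCAN formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def corrected_attention(regions, words, lam=9.0):
    """Illustrative sketch only (not BCAN itself): word-to-region
    cross-attention whose weights are corrected by a global-similarity
    signal, so words unrelated to the whole image attend less sharply."""
    # Cosine similarity between every word and every image region.
    r = regions / np.linalg.norm(regions, axis=1, keepdims=True)
    w = words / np.linalg.norm(words, axis=1, keepdims=True)
    sim = w @ r.T                        # shape: (n_words, n_regions)
    attn = softmax(lam * sim, axis=1)    # plain local attention per word
    # Global similarity: mean similarity over all pairs, used as a
    # reference level; attention is damped where a pair does not
    # exceed it (a stand-in for the paper's global correct unit).
    global_sim = sim.mean()
    corrected = attn * np.clip(sim - global_sim, 0.0, None)
    # Renormalize each row; fall back to the uncorrected attention
    # when a word exceeds the global level for no region at all.
    z = corrected.sum(axis=1, keepdims=True)
    return np.where(z > 0, corrected / np.maximum(z, 1e-12), attn)
```

Each row of the returned matrix is a distribution over image regions for one word, with mass shifted toward regions whose similarity stands out against the global context.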