Paper ID | ARS-6.10
Paper Title | FAST AND ACCURATE SCENE PARSING VIA BI-DIRECTION ALIGNMENT NETWORKS
Authors | Yanran Wu, Shanghai Jiao Tong University, China; Xiangtai Li, Peking University, China; Chen Shi, Shanghai Jiao Tong University, China; Yunhai Tong, Peking University, China; Yang Hua, Queen’s University Belfast, United Kingdom; Tao Song, Ruhui Ma, Haibing Guan, Shanghai Jiao Tong University, China
Session | ARS-6: Image and Video Interpretation and Understanding 1
Location | Area H
Session Time | Tuesday, 21 September, 15:30 - 17:00
Presentation Time | Tuesday, 21 September, 15:30 - 17:00
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Interpretation and Understanding
Abstract | In this paper, we propose an effective method for fast and accurate scene parsing called Bidirectional Alignment Network (BiAlignNet). Previously, a representative work, BiSeNet [1], uses two different paths (Context Path and Spatial Path) to achieve balanced learning of semantics and details, respectively. However, the relationship between the two paths is not well explored. We argue that both paths can benefit each other in a complementary way. Motivated by this, we propose a novel network that aligns the information of the two paths with each other through a learned flow field. To avoid noise and semantic gaps, we introduce a Gated Flow Alignment Module to align the features of both paths in a bidirectional way. Moreover, to make the Spatial Path learn more detailed information, we present an edge-guided hard pixel mining loss to supervise the aligned learning process. Our network achieves 80.1% and 78.5% mIoU on the Cityscapes validation and test sets, respectively, while running at 30 FPS with full-resolution inputs. Code and models will be available in due course.
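The abstract describes warping one path's features toward the other with a learned flow field and fusing them through a gate. Below is a minimal sketch of such a gated flow-alignment step, not the authors' released code: the module name `GatedFlowAlign`, the layer layout, and the additive gated fusion are assumptions made for illustration only.

```python
# Hypothetical sketch of a gated flow-alignment step in PyTorch.
# It warps `source` features onto `target` via a predicted 2-channel flow
# and admits the warped information through a learned per-pixel gate.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedFlowAlign(nn.Module):
    """Align `source` features to `target` with a learned flow, then gate the fusion."""

    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel 2-D offset (flow) from the concatenated features.
        self.flow_head = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        # Predict a per-pixel gate controlling how much warped information enters.
        self.gate_head = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)

    def forward(self, target: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
        # Bring `source` to the spatial size of `target` before alignment.
        source = F.interpolate(source, size=target.shape[-2:],
                               mode="bilinear", align_corners=False)
        feats = torch.cat([target, source], dim=1)
        flow = self.flow_head(feats)                  # (N, 2, H, W) offsets in pixels
        gate = torch.sigmoid(self.gate_head(feats))   # (N, 1, H, W) values in [0, 1]

        # Build a normalized sampling grid and shift it by the predicted flow.
        n, _, h, w = target.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h, device=target.device),
            torch.linspace(-1.0, 1.0, w, device=target.device),
            indexing="ij",
        )
        base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
        # Convert pixel offsets to the [-1, 1] coordinate system of grid_sample.
        norm_flow = torch.stack(
            (flow[:, 0] / max(w - 1, 1) * 2.0, flow[:, 1] / max(h - 1, 1) * 2.0),
            dim=-1,
        )
        warped = F.grid_sample(source, base_grid + norm_flow,
                               mode="bilinear", align_corners=True)

        # Gated fusion: keep the target content and add gated warped features.
        return target + gate * warped


if __name__ == "__main__":
    # Toy usage: align low-resolution Context Path features to the Spatial Path.
    spatial = torch.randn(1, 128, 64, 128)   # fine-detail features
    context = torch.randn(1, 128, 16, 32)    # downsampled semantic features
    aligned = GatedFlowAlign(128)(spatial, context)
    print(aligned.shape)  # torch.Size([1, 128, 64, 128])
```

A bidirectional version, as the paper's name suggests, would apply such a module in both directions (Context Path to Spatial Path and vice versa) so each path refines the other; the sketch above shows only one direction for brevity.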