Paper ID | ARS-8.8
Paper Title | AN UNSUPERVISED OPTICAL FLOW ESTIMATION FOR LIDAR IMAGE SEQUENCES
Authors | Xuezhou Guo, Xuhu Lin, Lili Zhao, Zezhi Zhu, Jianwen Chen, University of Electronic Science and Technology of China, China
Session | ARS-8: Image and Video Mid-Level Analysis
Location | Area I
Session Time | Monday, 20 September, 13:30 - 15:00
Presentation Time | Monday, 20 September, 13:30 - 15:00
Presentation | Poster
Topic | Image and Video Analysis, Synthesis, and Retrieval: Image & Video Mid-Level Analysis
Abstract | In recent years, LiDAR images, as a compact 2D representation of 3D LiDAR point clouds, have been widely applied in various tasks, e.g., 3D semantic segmentation and LiDAR point cloud compression (PCC). Among these works, optical flow estimation for LiDAR image sequences has become a key issue, especially for the motion estimation of inter prediction in PCC. However, existing optical flow estimation models are likely to be unreliable on LiDAR images. In this work, we first propose a light-weight flow estimation model for LiDAR image sequences. The key novelty of our method lies in two aspects. First, to account for the characteristics of LiDAR images (a spatially varying feature distribution) that differ from those of normal color images, we introduce an attention mechanism into our model to improve the quality of the estimated flow. Second, to tackle the lack of large-scale LiDAR-image annotations, we present an unsupervised method that directly minimizes the inconsistency between the reference image and the image reconstructed from the estimated optical flow. Extensive experimental results show that our proposed model outperforms other mainstream models on the KITTI dataset while using far fewer parameters.
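The unsupervised objective described in the abstract (penalizing the inconsistency between the reference image and the image reconstructed from the estimated flow) can be illustrated with a short warping-loss sketch. The code below is not the authors' implementation; it is a minimal example assuming a PyTorch setup, backward warping via `grid_sample`, an L1 reconstruction penalty, and illustrative names (`warp_with_flow`, `photometric_loss`) and LiDAR-image shapes chosen for the example.

```python
# Minimal sketch of an unsupervised photometric (reconstruction) loss for
# flow estimation, assuming PyTorch. All names and shapes are illustrative.
import torch
import torch.nn.functional as F


def warp_with_flow(target: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp `target` (N, C, H, W) using `flow` (N, 2, H, W) in pixels."""
    n, _, h, w = target.shape
    # Build a pixel-coordinate grid and displace it by the estimated flow.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=target.dtype, device=target.device),
        torch.arange(w, dtype=target.dtype, device=target.device),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]  # horizontal displacement
    grid_y = ys.unsqueeze(0) + flow[:, 1]  # vertical displacement
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    grid_x = 2.0 * grid_x / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid_y / max(h - 1, 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(target, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


def photometric_loss(reference: torch.Tensor, target: torch.Tensor,
                     flow: torch.Tensor) -> torch.Tensor:
    """L1 inconsistency between the reference image and the warped target."""
    reconstructed = warp_with_flow(target, flow)
    return (reference - reconstructed).abs().mean()


if __name__ == "__main__":
    # Example shapes only: a single-channel 64 x 2048 LiDAR range image pair.
    ref = torch.rand(1, 1, 64, 2048)
    tgt = torch.rand(1, 1, 64, 2048)
    flow = torch.zeros(1, 2, 64, 2048, requires_grad=True)
    loss = photometric_loss(ref, tgt, flow)
    loss.backward()  # gradients reach the estimated flow, enabling unsupervised training
    print(loss.item())
```

In an unsupervised training loop of this kind, the flow would come from the estimation network rather than a fixed tensor, so minimizing this loss drives the network without any ground-truth flow annotations.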