Paper ID | 3D-2.1
Paper Title | VIDEO-BASED DYNAMIC MESH CODING
Authors | Danillo Graziosi, Sony Corporation of America, United States
Session | 3D-2: Point Cloud Processing 2
Location | Area J
Session Time | Wednesday, 22 September, 08:00 - 09:30
Presentation Time | Wednesday, 22 September, 08:00 - 09:30
Presentation | Poster
Topic | Three-Dimensional Image and Video Processing: Point cloud processing
Abstract | In this paper we present a video-based approach for coding dynamic meshes, i.e., meshes whose connectivity changes at each frame. Inspired by the new Visual Volumetric Video-Based Coding (V3C) standard, which is used to code volumetric content such as point clouds and immersive video, we present a method to encode meshes using orthogonal projections, followed by atlas packing and video coding. The approach leverages widely available video codecs and allows the reconstruction of high-quality dynamic meshes from videos and a thin layer of metadata. It extends the capability of the V3C standard to include dynamic meshes. We present experimental results with dynamic meshes converted from high-quality point clouds used during the point cloud standardization process. Additionally, we use a metric under consideration by the MPEG group for mesh evaluation and compare our results with a state-of-the-art mesh compression approach.
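The abstract describes a projection-based pipeline (orthogonal projection, atlas packing, video coding plus a thin metadata layer). The sketch below is a minimal conceptual illustration of that idea, not the authors' implementation or the V3C specification: per-frame vertex positions are orthographically projected onto axis-aligned planes to form depth (geometry) patches, which are tiled into a toy atlas frame that a video codec could then compress. The function names, the single near-layer depth map, and the simple side-by-side packing are assumptions made for illustration.

```python
import numpy as np

def project_to_plane(vertices, axis, resolution=256):
    """Orthographically project vertices onto the plane normal to `axis`,
    keeping the nearest depth per pixel (a simple geometry image)."""
    # Normalize coordinates to [0, 1) so they map onto a fixed-size image.
    v = (vertices - vertices.min(0)) / (np.ptp(vertices, axis=0) + 1e-9)
    uv_axes = [a for a in range(3) if a != axis]
    u = np.clip((v[:, uv_axes[0]] * resolution).astype(int), 0, resolution - 1)
    w = np.clip((v[:, uv_axes[1]] * resolution).astype(int), 0, resolution - 1)
    depth = v[:, axis]
    image = np.full((resolution, resolution), np.inf)
    # Keep only the nearest surface point per pixel (near layer only).
    np.minimum.at(image, (u, w), depth)
    image[np.isinf(image)] = 0.0
    return (image * 255).astype(np.uint8)

def encode_frame(vertices, resolution=256):
    """Build a toy atlas by tiling the three axis-aligned depth patches
    side by side; a real encoder would pack per-patch bounding boxes and
    signal their placement in the atlas metadata."""
    patches = [project_to_plane(vertices, axis, resolution) for axis in range(3)]
    atlas = np.concatenate(patches, axis=1)  # one geometry video frame
    metadata = {"patch_count": 3, "resolution": resolution}  # illustrative only
    return atlas, metadata

if __name__ == "__main__":
    # Random positions stand in for the vertices of one dynamic-mesh frame.
    frame_vertices = np.random.rand(10000, 3)
    atlas, meta = encode_frame(frame_vertices)
    print(atlas.shape, meta)  # the atlas frames would then be fed to a video codec
```

In the actual method, connectivity and texture attributes are also carried (as additional atlas components and metadata), which is what allows full dynamic meshes, rather than only point positions, to be reconstructed from the decoded videos.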