Paper ID | SMR-2.10
Paper Title | PROXIQA: A PROXY APPROACH TO PERCEPTUAL OPTIMIZATION OF LEARNED IMAGE COMPRESSION
Authors | Li-Heng Chen, University of Texas at Austin, United States; Christos Bampis, Zhi Li, Andrey Norkin, Netflix, United States; Alan Bovik, University of Texas at Austin, United States
Session | SMR-2: Perception and Quality Models
Location | Area F
Session Time | Wednesday, 22 September, 14:30 - 16:00
Presentation Time | Wednesday, 22 September, 14:30 - 16:00
Presentation | Poster
Topic | Image and Video Sensing, Modeling, and Representation: Perception and quality models for images & video
Abstract | The use of ℓp (p = 1, 2) norms has largely dominated the measurement of loss in neural networks due to their simplicity and analytical properties. However, when used to assess the loss of visual information, these simple norms are not very consistent with human perception. Here, we describe a different “proximal” approach to optimize image analysis networks against quantitative perceptual models. Specifically, we construct a proxy network, broadly termed ProxIQA, which mimics the perceptual model while serving as a loss layer of the network. We experimentally demonstrate how this optimization framework can be applied to train an end-to-end optimized image compression network. By building on top of an existing deep image compression model, we are able to demonstrate a bitrate reduction of as much as 31% over MSE optimization, given a specified perceptual quality (VMAF) level.
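To make the abstract's training scheme concrete, below is a minimal PyTorch-style sketch of the general idea: a small proxy network is trained to mimic a non-differentiable perceptual metric (such as VMAF), and is then used as a differentiable loss layer when optimizing a learned image codec. This is not the authors' implementation; the names (ProxyIQA, CompressionNet, perceptual_score, train_step), the network shapes, the rate surrogate, and the exact loss form are all illustrative assumptions, and the perceptual score here is a toy stand-in so the sketch runs end to end.

```python
# Sketch (not the paper's code): alternating optimization with a proxy
# quality network acting as the perceptual loss for a learned codec.
import torch
import torch.nn as nn

class CompressionNet(nn.Module):
    """Toy autoencoder standing in for a learned image codec; returns a
    reconstruction and a scalar rate surrogate (assumption for the sketch)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 8, 3, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(8, 3, 4, stride=2, padding=1)

    def forward(self, x):
        y = self.enc(x)
        rate = y.abs().mean()                # crude rate proxy for the sketch
        return torch.sigmoid(self.dec(y)), rate

class ProxyIQA(nn.Module):
    """Small CNN mapping (reference, distorted) pairs to a scalar quality
    score; trained to regress the target perceptual model's output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, ref, dist):
        x = self.features(torch.cat([ref, dist], dim=1))
        return self.head(x.flatten(1))

def perceptual_score(ref, dist):
    """Stand-in for the real, non-differentiable perceptual model (e.g. VMAF
    computed by an external tool); here a toy monotone function of MSE so the
    example is self-contained."""
    mse = ((ref - dist) ** 2).flatten(1).mean(dim=1)
    return torch.exp(-mse)                   # pseudo "quality" in (0, 1]

def train_step(comp_net, proxy, x, lam, opt_comp, opt_proxy):
    # 1) Update the codec: rate plus a distortion term driven by the proxy.
    x_hat, rate = comp_net(x)
    loss_comp = rate + lam * (1.0 - proxy(x, x_hat).mean())
    opt_comp.zero_grad(); loss_comp.backward(); opt_comp.step()

    # 2) Update the proxy: regress the true perceptual score on the
    #    reconstructions produced by the current codec.
    with torch.no_grad():
        x_hat, _ = comp_net(x)
    target = perceptual_score(x, x_hat)
    loss_proxy = nn.functional.mse_loss(proxy(x, x_hat).squeeze(1), target)
    opt_proxy.zero_grad(); loss_proxy.backward(); opt_proxy.step()

# Usage with a dummy batch; lam trades rate against proxy-predicted quality.
comp_net, proxy = CompressionNet(), ProxyIQA()
opt_c = torch.optim.Adam(comp_net.parameters(), lr=1e-4)
opt_p = torch.optim.Adam(proxy.parameters(), lr=1e-4)
x = torch.rand(4, 3, 64, 64)
train_step(comp_net, proxy, x, 10.0, opt_c, opt_p)
```

The key design point this sketch tries to capture is that the codec never needs gradients from the perceptual metric itself: only the proxy must be differentiable, and it is periodically refit to the metric on the codec's own reconstructions.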