Paper ID | SS-NNC.4
Paper Title | ENCODER OPTIMIZATIONS FOR THE NNR STANDARD ON NEURAL NETWORK COMPRESSION
Authors | Paul Haase, Daniel Becking, Heiner Kirchhoffer, Karsten Müller, Heiko Schwarz, Wojciech Samek, Detlev Marpe, Thomas Wiegand, Fraunhofer Heinrich-Hertz-Institute, Germany
Session | SS-NNC: Special Session: Neural Network Compression and Compact Deep Features
Location | Area B
Session Time | Tuesday, 21 September, 08:00 - 09:30
Presentation Time | Tuesday, 21 September, 08:00 - 09:30
Presentation | Poster
Topic | Special Sessions: Neural Network Compression and Compact Deep Features: From Methods to Standards
Abstract | The novel Neural Network Compression and Representation Standard (NNR), recently issued by ISO/IEC MPEG, achieves very high coding gains, compressing neural networks to 5% of their original size without accuracy loss. The underlying NNR encoder technology includes parameter quantization, followed by efficient arithmetic coding, namely DeepCABAC. In addition, NNR allows very flexible adaptations, such as signaling specific local scaling values, setting quantization parameters per tensor rather than per network, and supporting specific parameter fusion operations. This paper presents our new approach for optimally deriving these parameters, namely for local scaling adaptation (LSA), inference-optimized quantization (IOQ), and batch-norm folding (BNF). By allowing inference and fine-tuning within the encoding process, quantization errors are reduced and the NNR coding efficiency is further improved, yielding compressed bitstreams of only 3% of the original model size.
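As background for the LSA idea sketched in the abstract: an NNR-style encoder quantizes each parameter tensor uniformly and can then signal per-row local scaling values that compensate part of the quantization error. The NumPy sketch below shows one plausible least-squares derivation of such factors; the function names, step size, and shapes are illustrative assumptions, not the derivation specified in the paper or the standard.

```python
import numpy as np

def quantize_uniform(w, stepsize):
    """Scalar uniform quantization: integer levels and their reconstruction."""
    levels = np.round(w / stepsize)
    return levels, levels * stepsize

def derive_local_scaling(w, w_rec):
    """Per-row scale factors s minimizing ||w - s * w_rec||^2 (least squares).

    w:     original 2-D weight matrix
    w_rec: its quantized reconstruction
    Returns one factor per output row, shaped for broadcasting.
    """
    num = (w * w_rec).sum(axis=1)
    den = (w_rec * w_rec).sum(axis=1)
    s = np.where(den > 0, num / den, 1.0)
    return s[:, None]

# Toy usage: quantize a random weight matrix, then compensate part of the
# quantization error with per-row scaling factors.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)
_, w_rec = quantize_uniform(w, stepsize=0.25)
s = derive_local_scaling(w, w_rec)
err_plain = np.linalg.norm(w - w_rec)
err_scaled = np.linalg.norm(w - s * w_rec)
assert err_scaled <= err_plain  # least-squares scaling never increases the error
```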
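Similarly, batch-norm folding (BNF) is a standard fusion operation: the batch-norm affine transform is absorbed into the preceding convolution or linear layer so that only the fused parameters need to be coded. A minimal sketch of the textbook folding rule follows; it assumes per-output-channel batch-norm parameters and is not taken from the paper's implementation.

```python
import numpy as np

def fold_batch_norm(weight, bias, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv/linear layer.

    weight: (out_channels, ...) kernel tensor
    bias:   (out_channels,) bias vector (zeros if the layer has none)
    gamma, beta, mean, var: per-channel batch-norm parameters
    """
    scale = gamma / np.sqrt(var + eps)  # per-output-channel scale
    # Broadcast the scale over all non-channel kernel dimensions.
    folded_weight = weight * scale.reshape(-1, *([1] * (weight.ndim - 1)))
    folded_bias = (bias - mean) * scale + beta
    return folded_weight, folded_bias
```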