SCALABLE LEARNED IMAGE COMPRESSION WITH A RECURRENT NEURAL NETWORKS-BASED HYPERPRIOR
Rige Su, Zhengxue Cheng, Heming Sun, Jiro Katto
Recently, learned image compression has made great progress, exemplified by the hyperprior model and its variants based on convolutional neural networks (CNNs). However, CNN-based models are not well suited to scalable coding, and multiple models must be trained separately to achieve variable rates. In this paper, we incorporate differentiable quantization and accurate entropy models into recurrent neural network (RNN) architectures to achieve scalable learned image compression. First, we present an RNN architecture with quantization and entropy coding. To realize scalable coding, we allocate bits across multiple layers by adjusting the layer-wise lambda values in the Lagrangian-multiplier-based rate-distortion optimization function. Second, we add an RNN-based hyperprior to improve the accuracy of the entropy models for the multiple-layer residual representations. Experimental results demonstrate that our performance is comparable with recent CNN-based hyperprior methods on the Kodak dataset. Moreover, our method is a scalable and flexible coding approach that achieves multiple rates with a single model, which is very appealing in practice.
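To illustrate the layer-wise Lagrangian rate-distortion objective described above, the following is a minimal sketch, not the authors' released code; the function name, tensor layout, and the specific lambda values are assumptions for illustration only.

```python
# Sketch of a layer-wise Lagrangian rate-distortion loss for a
# multi-layer (scalable) compressor. Names and lambda values are
# illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def layerwise_rd_loss(originals, reconstructions, bitrates, lambdas):
    """Compute sum over layers l of (R_l + lambda_l * D_l).

    originals / reconstructions: per-layer image tensors,
    bitrates: per-layer estimated bits-per-pixel (scalar tensors),
    lambdas: per-layer Lagrangian multipliers; a larger lambda_l
             allocates more quality (and hence bits) to layer l.
    """
    loss = torch.zeros(())
    for x, x_hat, rate, lam in zip(originals, reconstructions, bitrates, lambdas):
        distortion = F.mse_loss(x_hat, x)       # D_l (MSE distortion)
        loss = loss + rate + lam * distortion   # R_l + lambda_l * D_l
    return loss

# Usage with dummy tensors for a hypothetical 3-layer scalable model:
x = torch.rand(1, 3, 64, 64)
recons = [x + 0.05 * torch.randn_like(x) for _ in range(3)]
rates = [torch.tensor(r) for r in (0.2, 0.4, 0.6)]   # assumed bpp estimates
lambdas = [64.0, 128.0, 256.0]                       # assumed layer-wise values
print(layerwise_rd_loss([x] * 3, recons, rates, lambdas))
```

Increasing lambda for later layers steers the optimizer toward spending more of the bit budget on the enhancement layers, which is how a single trained model can cover multiple rate points.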