CONTENT-ADAPTIVE PARALLEL ENTROPY CODING FOR END-TO-END IMAGE COMPRESSION
Shujia Li, Dezhao Wang, Zejia Fan, Jiaying Liu
State-of-the-art entropy models, e.g., autoregressive context models, exploit the spatial correlation among latent representations, leading to more accurate entropy estimation. However, the autoregressive design inherently enforces serial decoding and precludes parallelization, which makes the decoding procedure slow and less practical. To address this issue, we propose a Content-Adaptive Parallel Entropy Model (CAPEM) that performs a two-pass context calculation with dynamically generated patterns. CAPEM relaxes the strict coding order, while the dynamic context mechanism retains the flexibility to capture latent dependencies. This design greatly improves the parallelism of the context model, leading to higher coding efficiency while maintaining the same rate-distortion performance. We evaluate the model on the widely used Kodak and CLIC image datasets. Experimental results show that the proposed model outperforms recent works with lower complexity.
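As a rough illustration of such a two-pass scheme (not the authors' implementation), the PyTorch sketch below assumes the anchor/non-anchor split is predicted from hyperprior features: anchor positions are coded in a first, fully parallel pass using only the hyperprior, and the remaining positions are coded in a second pass that additionally sees the decoded anchors as spatial context. The names TwoPassContextModel, pattern_net, context_net, and param_net are placeholders; the actual CAPEM pattern generation and network architecture are not specified here.

```python
import torch
import torch.nn as nn


class TwoPassContextModel(nn.Module):
    """Sketch of a two-pass, content-adaptive context model (illustrative only)."""

    def __init__(self, channels: int = 192):
        super().__init__()
        # Predicts a per-position score used to assign anchor positions dynamically.
        self.pattern_net = nn.Conv2d(2 * channels, 1, kernel_size=3, padding=1)
        # Aggregates already-decoded anchor latents as spatial context (second pass).
        self.context_net = nn.Conv2d(channels, 2 * channels, kernel_size=5, padding=2)
        # Fuses hyperprior features and spatial context into entropy parameters.
        self.param_net = nn.Sequential(
            nn.Conv2d(4 * channels, 2 * channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
        )

    def forward(self, y: torch.Tensor, hyper: torch.Tensor):
        # y: quantized latents (N, C, H, W); hyper: hyperprior features (N, 2C, H, W).
        # Content-adaptive pattern: the better-scoring half of the positions
        # become anchors and are decoded first.
        score = self.pattern_net(hyper)
        threshold = score.flatten(2).median(dim=2).values[..., None, None]
        anchor = (score >= threshold).float()

        # Pass 1: anchors use hyperprior-only context (all positions in parallel).
        ctx = self.context_net(y * anchor)
        p1 = self.param_net(torch.cat([hyper, torch.zeros_like(ctx)], dim=1))

        # Pass 2: non-anchors additionally see the decoded anchors as context.
        p2 = self.param_net(torch.cat([hyper, ctx], dim=1))

        # Each position takes its parameters from the pass in which it is coded.
        params = anchor * p1 + (1.0 - anchor) * p2
        mean, scale = params.chunk(2, dim=1)
        return mean, scale, anchor


if __name__ == "__main__":
    model = TwoPassContextModel(channels=192)
    y = torch.randn(1, 192, 16, 16)      # stand-in for quantized latents
    hyper = torch.randn(1, 384, 16, 16)  # stand-in for hyperprior decoder output
    mean, scale, anchor = model(y, hyper)
    print(mean.shape, scale.shape, anchor.mean().item())
```

Because every position belongs to exactly one of the two passes, decoding requires only two parallel invocations of the entropy model instead of one per latent position, which is the source of the speed-up claimed for such designs.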