
SIGNAL COMPRESSION VIA NEURAL IMPLICIT REPRESENTATIONS

Francesca Pistilli, Diego Valsesia, Giulia Fracastoro, Enrico Magli

10 May 2022

Existing end-to-end signal compression schemes using neural networks are largely based on an autoencoder-like structure, in which a universal encoding function maps the signal to a compact latent space and the representation in this space is quantized and stored. Recently, advances from the field of 3D graphics have shown the possibility of building implicit representation networks, i.e., neural networks that return the value of a signal at a given query coordinate. In this paper, we propose neural implicit representations as a novel paradigm for signal compression with neural networks, in which the compact representation of the signal is defined by the very weights of the network. We discuss how this compression framework works, how to include priors in the design, and highlight interesting connections with transform coding. While the framework is general and still lacks maturity, we already show very competitive performance on the task of compressing point cloud attributes, which is notoriously challenging due to the irregularity of the domain but becomes trivial in the proposed framework.
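
As a concrete illustration of the paradigm, the sketch below overfits a small coordinate MLP to a signal and treats the quantized network weights as the stored representation. This is a minimal sketch assuming PyTorch; the architecture, hyperparameters, function names (ImplicitSignal, compress, decompress), and the crude uniform weight quantization are illustrative assumptions, not the authors' actual design, which additionally exploits priors and its connections to transform coding.

    # Minimal sketch of compression via a neural implicit representation,
    # assuming PyTorch. All design choices here are illustrative.
    import torch
    import torch.nn as nn

    class ImplicitSignal(nn.Module):
        """MLP mapping a query coordinate (e.g., a 3D point) to a signal
        value (e.g., an RGB attribute). The trained weights themselves
        are the compressed representation of the signal."""
        def __init__(self, in_dim=3, hidden=64, out_dim=3, n_layers=4):
            super().__init__()
            layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
            for _ in range(n_layers - 2):
                layers += [nn.Linear(hidden, hidden), nn.ReLU()]
            layers.append(nn.Linear(hidden, out_dim))
            self.net = nn.Sequential(*layers)

        def forward(self, coords):
            return self.net(coords)

    def compress(coords, values, steps=2000, lr=1e-3):
        """Overfit one network per signal; store its quantized weights."""
        model = ImplicitSignal(coords.shape[-1], out_dim=values.shape[-1])
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(coords), values)
            loss.backward()
            opt.step()
        # Crude uniform quantization of the weights stands in for the
        # coding stage a full codec would apply to the bitstream.
        return {k: torch.round(v * 256) / 256
                for k, v in model.state_dict().items()}

    def decompress(state, query_coords, out_dim=3):
        """Decoding is just a forward pass at the query coordinates."""
        model = ImplicitSignal(query_coords.shape[-1], out_dim=out_dim)
        model.load_state_dict(state)
        with torch.no_grad():
            return model(query_coords)

For a point cloud with positions xyz (N x 3) and colors rgb (N x 3), compress(xyz, rgb) returns the weight dictionary to be stored, and decompress(state, xyz) reconstructs the colors at arbitrary query coordinates. Because the decoder only ever evaluates the network pointwise, the irregularity of the point cloud domain never enters the picture, which is what makes this task trivial in the proposed framework.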