  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 15:50
04 May 2020

Parsimonious modelling, including sparsity and low rankness, has become a cornerstone of modern machine learning and signal processing. However, these modelling techniques have limited capability to learn from large-scale data and often require pre-defined parameters to configure their optimization procedures. In this paper, we propose a novel method to design specific deep neural networks for sparse and low-rank models, where the network can learn a data-adaptive model from training data. In particular, we design differentiable network units for sparse and low-rank matrices. Each layer of the network represents one iteration of the optimization process of sparse and low-rank models. The effectiveness of the proposed method is evaluated on the task of audio-visual object localization. Experimental results indicate the superior performance of the proposed method over traditional sparse and low-rank models.
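The abstract's idea of mapping each optimization iteration to a network layer is commonly known as algorithm unrolling. As a hedged illustration (not the paper's actual architecture), the sketch below unrolls ISTA for sparse coding: each "layer" applies a linear update followed by soft-thresholding, and in a learned network the matrices `W1`, `W2` and the threshold `theta` would become trainable per-layer parameters. All names and the analytic initialization are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrinks each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(D, y, n_layers=5, theta=0.1):
    """Each 'layer' is one ISTA iteration for
        min_z 0.5 * ||y - D z||^2 + theta * ||z||_1.
    In a learned (LISTA-style) network, W1, W2 and theta would be
    trainable parameters instead of this analytic initialization."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    W1 = D.T / L                                  # input-mixing weights
    W2 = np.eye(D.shape[1]) - (D.T @ D) / L       # state-mixing weights
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):                     # one loop pass = one network layer
        z = soft_threshold(W2 @ z + W1 @ y, theta / L)
    return z
```

Because every operation (matrix products and soft-thresholding) is differentiable almost everywhere, the unrolled network can be trained end-to-end by backpropagation, which is what lets the model adapt its parameters to data rather than relying on hand-tuned ones.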
