Tensor Decomposition Via Core Tensor Networks

Jianfu Zhang, Zerui Tao, Liqing Zhang, Qibin Zhao

Length: 00:11:08
10 Jun 2021

Tensor decomposition (TD) has shown promising performance in image completion and denoising. Existing methods typically decompose a single tensor into latent factors or core tensors by optimizing a cost function tied to a specific tensor model. For each individual tensor, these algorithms iteratively learn the optimum from a random initialization, resulting in slow convergence and low efficiency. In this paper, we propose an efficient TD algorithm that learns a global mapping from input tensors to latent core tensors, under the assumption that the mappings of multiple tensors are shared or highly correlated. To this end, we train a deep neural network (DNN) to model the global mapping and then apply it to decompose a newly given tensor with high efficiency. Furthermore, the initial values of the DNN are learned via meta-learning. By leveraging the pretrained core-tensor DNN, the proposed method performs TD both efficiently and accurately. Experimental results demonstrate significant improvements of our method over other TD methods in both speed and accuracy.
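The per-tensor optimization baseline that the abstract contrasts against — decomposing each tensor from scratch by iterating from a random initialization — can be sketched with a rank-1 CP (CANDECOMP/PARAFAC) decomposition via alternating least squares. This is a minimal illustrative sketch, not the authors' method; the function name and iteration count are our own choices.

```python
import numpy as np

def cp_rank1_als(T, n_iter=50, seed=0):
    """Rank-1 CP decomposition T ~ lam * (a outer b outer c) of a 3-way
    tensor via alternating least squares, starting from a random
    initialization -- the per-tensor optimization loop that amortized
    (learned-mapping) approaches aim to replace."""
    rng = np.random.default_rng(seed)
    _, J, K = T.shape
    b = rng.standard_normal(J); b /= np.linalg.norm(b)
    c = rng.standard_normal(K); c /= np.linalg.norm(c)
    lam = 0.0
    for _ in range(n_iter):
        # Each factor update contracts T against the other two factors.
        a = np.einsum('ijk,j,k->i', T, b, c)
        a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c)
        b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b)
        lam = np.linalg.norm(c)  # scale absorbed into lam
        c /= lam
    return lam, a, b, c

# Synthetic exactly-rank-1 tensor: ALS should recover it near-exactly.
rng = np.random.default_rng(1)
a0, b0, c0 = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
T = 2.0 * np.einsum('i,j,k->ijk',
                    a0 / np.linalg.norm(a0),
                    b0 / np.linalg.norm(b0),
                    c0 / np.linalg.norm(c0))
lam, a, b, c = cp_rank1_als(T)
T_hat = lam * np.einsum('i,j,k->ijk', a, b, c)
err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

Each call repeats the full iterative loop for every new tensor; the paper's proposal is to amortize this cost by training a DNN that maps an input tensor directly to its core tensors in a single forward pass.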

Chairs:
Xin Tian
