04 May 2020

Generalization of a deep neural network (DNN) is one major concern when employing the deep learning approach to solve practical problems. In this paper we propose a new technique, named projected weight regularization (PWR), to improve the generalization capacity of a DNN model. Consider a weight matrix W from a particular neural layer in the model. Our objective is to make the eigenvalues of the matrix product WW^T have comparable or roughly the same magnitudes while allowing the DNN model to fit the training data sufficiently accurately. Intuitively, doing so prevents the W matrix from matching the training data too closely. Specifically, at each iteration, we first project the W matrix onto a number of vectors along randomly generated directions. We then build an objective function of the projected vectors that regularizes their behavior towards comparable eigenvalue magnitudes of WW^T. Experimental results on training VGG16 on CIFAR10 show that PWR combined with centered weight normalization (CWN) yields promising validation performance compared to orthonormal regularization combined with CWN.
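The abstract does not give the exact PWR objective, so the following is only a minimal sketch of the idea it describes: project W along random unit directions and penalize the spread of the squared projection norms, which pushes the eigenvalues of WW^T towards comparable magnitudes. The function name pwr_penalty, the number of directions, and the specific variance-style penalty are illustrative assumptions, not the authors' formulation.

```python
import torch

def pwr_penalty(W: torch.Tensor, num_dirs: int = 16) -> torch.Tensor:
    """Sketch of a projected-weight-style penalty (assumed form).

    If all eigenvalues of W W^T were equal, ||W u||^2 would be identical for
    every unit vector u, so penalizing the spread of the squared projection
    norms encourages eigenvalues of comparable magnitude.
    """
    in_dim = W.shape[1]
    # Random projection directions, re-sampled each call and normalized to unit length.
    U = torch.randn(in_dim, num_dirs, device=W.device, dtype=W.dtype)
    U = U / U.norm(dim=0, keepdim=True)
    # Squared norms of the projected vectors W u_i.
    sq_norms = (W @ U).pow(2).sum(dim=0)
    # Penalize deviation of each squared norm from the mean squared norm.
    return ((sq_norms - sq_norms.mean()) ** 2).mean()

# Usage (illustrative): add the penalty, scaled by a coefficient, to the task loss,
# flattening convolutional kernels to 2-D weight matrices first.
# loss = task_loss + lam * sum(
#     pwr_penalty(m.weight.view(m.weight.shape[0], -1))
#     for m in model.modules()
#     if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d)))
```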
