Rethinking the PID Optimizer for Stochastic Optimization of Deep Networks

Lei Shi, Yifan Zhang, Wanguo Wang, Jian Cheng, Hanqing Lu

08 Jul 2020

Stochastic gradient descent with momentum (SGD-Momentum) suffers from the overshoot problem caused by the integral action of the momentum term. Recently, an ID optimizer was proposed to solve the overshoot problem with the help of derivative information. However, the derivative term is susceptible to high-frequency noise, especially for stochastic gradient descent, which uses mini-batch data in each update step. In this work, we propose a complete PID optimizer, which weakens the effect of the D term and adds a P term to alleviate the overshoot problem more stably. To further reduce the interference of high-frequency noise, we propose two effective and efficient methods to stabilize the training process. Extensive experiments on three widely used benchmark datasets of different scales, i.e., MNIST, CIFAR-10, and TinyImageNet, demonstrate the superiority of our proposed PID optimizer on various popular deep neural networks.
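The page provides only the abstract, not the paper's update equations. The snippet below is a minimal sketch of a generic PID-style optimizer in PyTorch, assuming the usual control-theoretic reading of the terms (P = the current gradient, I = an exponentially accumulated momentum buffer, D = the difference between consecutive mini-batch gradients). The class name PIDOptimizerSketch, the gains kp, ki, kd, and the default values are illustrative assumptions, not the authors' formulation, and the paper's two noise-suppression methods are not reproduced here.

```python
import torch


class PIDOptimizerSketch(torch.optim.Optimizer):
    """Illustrative PID-style update (hypothetical, not the paper's exact rule):
    P = present gradient, I = accumulated momentum, D = gradient difference."""

    def __init__(self, params, lr=0.01, momentum=0.9, kp=1.0, ki=1.0, kd=0.1):
        defaults = dict(lr=lr, momentum=momentum, kp=kp, ki=ki, kd=kd)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            lr, mu = group["lr"], group["momentum"]
            kp, ki, kd = group["kp"], group["ki"], group["kd"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                state = self.state[p]
                if len(state) == 0:
                    state["I"] = torch.zeros_like(p)          # integral (momentum) buffer
                    state["prev_grad"] = torch.zeros_like(p)  # previous gradient for D term
                # I term: integral action via exponentially accumulated gradients
                state["I"].mul_(mu).add_(g)
                # D term: finite difference of consecutive mini-batch gradients
                # (the component the abstract describes as noise-sensitive)
                d = g - state["prev_grad"]
                state["prev_grad"].copy_(g)
                # Combined PID step: weight present, past, and change of the gradient
                p.add_(kp * g + ki * state["I"] + kd * d, alpha=-lr)
```

Used like any other PyTorch optimizer, e.g. optimizer = PIDOptimizerSketch(model.parameters(), lr=0.01); the relative sizes of kp, ki, and kd control how strongly the present gradient, the momentum history, and the gradient change contribute to each update.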
