SPS
Length: 00:09:50
Deep neural networks (DNNs) are typically optimized using stochastic gradient descent (SGD). However, gradient estimates computed from stochastic samples tend to be noisy and unreliable, resulting in large gradient variance and poor convergence. In this paper, we propose the Kalman Optimizer (KO), an efficient stochastic optimization algorithm that adopts a Kalman filter to produce a consistent estimate of the local gradient by solving an adaptive filtering problem. Our method reduces the estimation variance of stochastic gradient descent by incorporating the historical state of the optimization. It aims both to improve the noisy gradient direction and to accelerate the convergence of learning. We demonstrate the effectiveness of the proposed Kalman Optimizer on various optimization tasks, where it is shown to achieve superior and robust performance. The code is available at https://github.com/Adamdad/Filter-Gradient-Decent.
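The core idea of filtering gradients before the descent step can be illustrated with a minimal sketch. The code below is an assumption-laden toy version, not the authors' exact algorithm: it applies an independent scalar Kalman filter to each parameter's gradient (predict with a small process-noise term, then blend the noisy minibatch gradient with the running estimate via the Kalman gain) and takes an SGD step with the filtered gradient. The noise constants `q` and `r` and the quadratic test objective are illustrative choices, not values from the paper.

```python
import numpy as np

class KalmanGradientFilter:
    """Per-parameter scalar Kalman filter that smooths noisy gradient
    observations before an SGD update. Illustrative sketch only; the
    process noise q and observation noise r are assumed constants."""

    def __init__(self, shape, q=1e-4, r=1e-2):
        self.g_hat = np.zeros(shape)  # filtered gradient estimate
        self.P = np.ones(shape)       # variance of the estimate
        self.q = q                    # process noise: gradient drift between steps
        self.r = r                    # observation noise: minibatch sampling noise

    def update(self, g_obs):
        # Predict: the true gradient is assumed to drift slowly, so the
        # estimate's uncertainty grows by q before seeing new data.
        self.P = self.P + self.q
        # Update: blend the noisy observation with the running estimate,
        # weighted by the Kalman gain K.
        K = self.P / (self.P + self.r)
        self.g_hat = self.g_hat + K * (g_obs - self.g_hat)
        self.P = (1.0 - K) * self.P
        return self.g_hat

# Usage: minimize f(w) = 0.5 * ||w||^2 given noisy gradient observations.
rng = np.random.default_rng(0)
w = np.array([5.0, -3.0])
kf = KalmanGradientFilter(w.shape)
lr = 0.1
for _ in range(200):
    noisy_grad = w + rng.normal(scale=0.5, size=w.shape)  # true grad is w
    w = w - lr * kf.update(noisy_grad)
print(np.abs(w).max())  # close to 0 despite the heavy gradient noise
```

The filter acts like an adaptive form of momentum: when observation noise `r` dominates, the gain `K` is small and the update relies on the accumulated historical estimate, which is exactly the variance-reduction effect the abstract describes.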
Chairs:
Mert Pilanci