  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:13:28
09 Jun 2021

Recently, deep neural networks (DNNs) have shown advantages in accelerating optimization algorithms. One approach is to unfold a finite number of iterations of a conventional optimization algorithm and to learn the parameters within those iterations. However, these are feed-forward methods and are in fact neither iterative nor convergent. Here, we present a novel DNN-based convergent iterative algorithm that accelerates conventional optimization methods. We train a DNN to yield the parameters of the scaled gradient projection method. So far, these parameters have been chosen heuristically, but they have been shown to be crucial for good empirical performance. In simulations, the proposed method significantly improves the empirical convergence rate over conventional optimization methods on various large-scale inverse problems in image processing.
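
For context, the scaled gradient projection update referenced in the abstract takes the form x_{k+1} = P_C(x_k - alpha_k * D_k * grad f(x_k)), where alpha_k is a step size and D_k a scaling matrix. Below is a minimal Python sketch of that iteration under stated assumptions: the `predict_params` callable stands in for the trained DNN that supplies alpha_k and a diagonal D_k, and the toy non-negative least-squares problem and constant-step placeholder are illustrative, not the paper's actual network or experiments.

```python
import numpy as np

def scaled_gradient_projection(grad_f, project, predict_params, x0, n_iters=50):
    """Generic scaled gradient projection (SGP) iteration.

    Update: x_{k+1} = P_C( x_k - alpha_k * D_k * grad_f(x_k) ),
    where P_C projects onto the constraint set. Classically alpha_k and
    the diagonal scaling D_k are chosen heuristically; here they come
    from `predict_params`, a stand-in for the trained DNN.
    """
    x = x0
    for k in range(n_iters):
        g = grad_f(x)
        alpha, d = predict_params(x, g, k)  # hypothetical DNN output: step size, diagonal scaling
        x = project(x - alpha * d * g)
    return x

# Toy usage: non-negative least squares  min_{x >= 0} 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

grad_f = lambda x: A.T @ (A @ x - b)
project = lambda x: np.maximum(x, 0.0)   # projection onto the non-negative orthant
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad_f

# Placeholder for the learned parameter network: fixed 1/L step, identity scaling.
predict_params = lambda x, g, k: (1.0 / L, np.ones_like(x))

x_hat = scaled_gradient_projection(grad_f, project, predict_params, np.zeros(10))
```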

Chairs:
Saiprasad Ravishankar
