Optimization Guarantees for ISTA and ADMM based Unfolded Networks
Wei Pu, Miguel Rodrigues, Yonina Eldar
Recently, unfolding techniques have been widely used to solve inverse problems in various applications. In this paper, we study optimization guarantees for two popular unfolded networks: networks derived from the iterative soft thresholding algorithm (ISTA) and networks derived from the alternating direction method of multipliers (ADMM). Our guarantees, which leverage the Polyak-Łojasiewicz* (PL*) condition, state that the training (empirical) loss converges to zero as the number of gradient descent epochs increases, provided that the number of training samples is below a threshold that depends on various quantities underlying the desired information processing task. Our guarantees also show that this threshold is larger for unfolded ISTA than for unfolded ADMM, suggesting that there are regimes of the number of training samples in which the training error of unfolded ADMM does not converge to zero whereas the training error of unfolded ISTA does. Numerical results are provided to back up our theoretical findings.
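For readers unfamiliar with unfolding, the sketch below illustrates the kind of unfolded ISTA (LISTA-style) network such guarantees concern: each layer mirrors one ISTA iteration, with the weight matrices and soft-thresholding levels treated as trainable parameters. The parameterization, layer count, and initialization here are illustrative assumptions, not necessarily the exact architecture analyzed in the paper.

```python
import torch
import torch.nn as nn


class UnfoldedISTA(nn.Module):
    """Minimal LISTA-style unfolding of ISTA for y = A x + noise.

    Each layer computes x <- soft(W1 y + W2 x, theta_k), mirroring one
    ISTA iteration with learned weights. Illustrative sketch only.
    """

    def __init__(self, A: torch.Tensor, num_layers: int = 10,
                 step: float = 0.1, lam: float = 0.1):
        super().__init__()
        m, n = A.shape
        self.num_layers = num_layers
        # Initialize from the classical ISTA update
        # x <- soft(x + step * A^T (y - A x), step * lam).
        self.W1 = nn.Parameter(step * A.t().clone())                      # n x m
        self.W2 = nn.Parameter(torch.eye(n) - step * (A.t() @ A))         # n x n
        self.theta = nn.Parameter(torch.full((num_layers,), step * lam))  # per-layer threshold

    @staticmethod
    def soft(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
        # Element-wise soft-thresholding operator.
        return torch.sign(x) * torch.relu(torch.abs(x) - theta)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, m) measurements; returns (batch, n) sparse estimates.
        x = torch.zeros(y.shape[0], self.W2.shape[0], device=y.device)
        for k in range(self.num_layers):
            x = self.soft(y @ self.W1.t() + x @ self.W2.t(), self.theta[k])
        return x


# Example usage (dimensions are arbitrary, for illustration only).
A = torch.randn(20, 50)            # assumed 20 x 50 sensing matrix
net = UnfoldedISTA(A, num_layers=8)
y = torch.randn(4, 20)             # batch of 4 measurement vectors
x_hat = net(y)                     # (4, 50) sparse code estimates
```

An unfolded ADMM network is built analogously, with each layer carrying out the alternating primal, auxiliary, and dual updates of one ADMM iteration and the corresponding penalty and step parameters learned per layer; training either network amounts to minimizing an empirical loss over these per-layer parameters by gradient descent, which is the setting the PL*-based guarantees address.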