A Penalty Alternating Direction Method Of Multipliers For Decentralized Composite Optimization
Jiaojiao Zhang, Anthony Man-Cho So, Qing Ling
SPS
This paper proposes a penalty alternating direction method of multipliers (ADMM) to minimize the sum of convex composite functions over a decentralized network. Each agent in the network holds a private function consisting of a smooth part and a nonsmooth part, and can only exchange information with its neighbors during the optimization process. We consider a penalized approximation of the decentralized optimization problem, but unlike existing penalty methods, here the penalty parameter can be very small, so that the approximation error is negligible. On the other hand, the small penalty parameter makes the penalized objective ill-conditioned, forcing the popular proximal gradient descent method to use a small step size, which slows it down. To address this issue, we propose to solve the penalized formulation with ADMM. We further exploit the composite structure of the private functions by linearizing the smooth parts, which reduces computational costs, and by handling the nonsmooth parts with proximal operators. The proposed penalty ADMM (abbreviated as PAD) is convergent when the private functions are convex, and linearly convergent when the smooth parts are further strongly convex. Numerical experiments corroborate the theoretical analysis and demonstrate the advantages of PAD over existing state-of-the-art algorithms.
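The abstract does not give the PAD update equations, but its key computational primitive — linearizing the smooth part of a composite function and handling the nonsmooth part with a proximal operator — can be sketched as a standard linearized (proximal-gradient) step. The code below is a minimal illustration of that primitive on a hypothetical single-agent problem with a least-squares smooth part and an ℓ1 nonsmooth part; the data, step size, and function names are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def linearized_prox_step(x, grad_f, alpha, lam):
    # One linearized step: replace the smooth part f by its linearization
    # at x, keep the nonsmooth part lam*||.||_1 exact; the minimizer is a
    # proximal-gradient update.
    return soft_threshold(x - alpha * grad_f(x), alpha * lam)

# Hypothetical composite objective for one agent:
#   f(x) = 0.5 * ||A x - b||^2   (smooth part)
#   g(x) = lam * ||x||_1         (nonsmooth part)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

alpha = 1.0 / np.linalg.norm(A.T @ A, 2)  # step size 1/L, L = Lipschitz const.
x = np.zeros(5)
vals = [obj(x)]
for _ in range(100):
    x = linearized_prox_step(x, grad_f, alpha, lam)
    vals.append(obj(x))
```

With the step size set to the reciprocal of the gradient's Lipschitz constant, each such step does not increase the objective; in PAD this primitive is embedded inside ADMM iterations on the penalized formulation rather than applied directly.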