Distributed ADMM with Limited Communications via Deep Unfolding
Yoav Noah (Ben-Gurion University of the Negev); Nir Shlezinger (Ben-Gurion University)
Distributed optimization arises in various applications. A widely used distributed optimizer is the distributed alternating direction method of multipliers (D-ADMM) algorithm, which enables agents to jointly minimize a shared objective by iteratively combining local computations and message exchanges. However, D-ADMM often requires a large number of possibly costly communications to reach convergence, limiting its applicability in communication-constrained networks. In this work, we propose unfolded D-ADMM, which facilitates the application of D-ADMM with limited communications using the emerging deep unfolding methodology. We employ the conventional D-ADMM algorithm with a fixed number of communication rounds, while leveraging data to tune the hyperparameters of each iteration of the algorithm. By doing so, we learn to optimize with limited communications while preserving the interpretability and flexibility of the original D-ADMM algorithm. Our numerical results demonstrate that the proposed approach dramatically reduces the number of communications utilized by D-ADMM without compromising its performance.
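To make the idea concrete, below is a minimal PyTorch sketch of one way to unfold a consensus-form D-ADMM, assuming a distributed least-squares objective and a single learnable penalty parameter per communication round. The class name UnfoldedDADMM, the choice of hyperparameters to learn, and the row-normalized mixing matrix are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class UnfoldedDADMM(nn.Module):
    """Sketch: consensus ADMM for distributed least squares, unfolded into
    a fixed number of communication rounds with a learnable penalty rho_t
    per round (hypothetical parameterization for illustration)."""

    def __init__(self, num_iters: int):
        super().__init__()
        # One learnable penalty per unfolded iteration; exp() keeps it positive.
        self.log_rho = nn.Parameter(torch.zeros(num_iters))
        self.num_iters = num_iters

    def forward(self, A, b, mixing):
        # A: (N, m, d) local measurement matrices of the N agents,
        # b: (N, m) local observations,
        # mixing: (N, N) row-normalized adjacency (mixing) matrix of the graph.
        N, m, d = A.shape
        x = torch.zeros(N, d)   # local primal iterates
        u = torch.zeros(N, d)   # local scaled dual iterates
        AtA = A.transpose(1, 2) @ A                               # (N, d, d)
        Atb = (A.transpose(1, 2) @ b.unsqueeze(-1)).squeeze(-1)   # (N, d)
        eye = torch.eye(d)
        for t in range(self.num_iters):
            rho = torch.exp(self.log_rho[t])
            # Communication round: each agent averages its neighbors' iterates.
            z = mixing @ x
            # Local primal update: closed-form ridge-regularized LS solve.
            rhs = (Atb + rho * (z - u)).unsqueeze(-1)
            x = torch.linalg.solve(AtA + rho * eye, rhs).squeeze(-1)
            # Dual update against the refreshed neighborhood average.
            u = u + x - mixing @ x
        return x
```

Training would proceed end-to-end: run the fixed-depth forward pass on sampled problem instances and backpropagate a loss on the final iterates (e.g., distance to the centralized solution) into the per-iteration hyperparameters, so that good performance is reached within the prescribed communication budget.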