04 May 2020

The paper considers the problem of network-based computation of global minima in smooth nonconvex optimization problems. It is known that distributed gradient-descent-type algorithms can converge to the set of global minima by adding slowly decaying Gaussian noise to escape local minima. However, the technical assumptions under which convergence is known to occur can be restrictive in practice. In particular, known convergence results require the agents' local objective functions to satisfy a highly restrictive bounded-gradient-dissimilarity condition. The paper demonstrates convergence to the set of global minima while relaxing this key assumption.
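To make the general scheme concrete, below is a minimal sketch (not the paper's exact algorithm) of distributed gradient descent with slowly decaying Gaussian noise: each agent averages its iterate with its neighbors' via a doubly stochastic mixing matrix, takes a step along its local gradient, and adds annealing noise whose scale decays slowly enough to allow escape from local minima. The local objectives, ring topology, and step-size/noise schedules here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 agents on a ring, each holding a nonconvex
# local objective f_i(x) = (x - offsets[i])**2 + 2*sin(2x).
# The network-wide goal is to minimize f(x) = sum_i f_i(x).
offsets = np.array([-2.0, -1.0, 1.0, 2.0])

def local_grad(i, x):
    # Gradient of f_i: 2*(x - a_i) + 4*cos(2x); the sinusoid creates local minima.
    return 2.0 * (x - offsets[i]) + 4.0 * np.cos(2.0 * x)

# Doubly stochastic mixing (consensus) weights for a 4-agent ring.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = rng.uniform(-4.0, 4.0, size=4)   # each agent's local iterate
for k in range(20000):
    alpha = 1.0 / (k + 10)                             # diminishing step size
    gamma = 1.0 / np.sqrt((k + 10) * np.log(k + 10))   # slowly decaying noise scale (illustrative)
    grads = np.array([local_grad(i, x[i]) for i in range(4)])
    # Consensus step + noisy local gradient step.
    x = W @ x - alpha * grads + gamma * rng.standard_normal(4)

print("agent iterates:", x)  # iterates should be near consensus, clustered near a minimizer
```

The decisive design choice in such schemes is the noise schedule: it must decay slowly enough that the iterates can still hop out of local minima early on, but fast enough that they eventually settle; the specific schedule above is only one plausible instance, and the conditions under which it yields global convergence are exactly what the paper analyzes.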
