  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 14:59
04 May 2020

Gradient-based algorithms play an important role in solving a wide range of stochastic optimization problems, and in recent years parallel implementations of such schemes have become the new paradigm. In this work, we focus on the asynchronous implementation of gradient-based algorithms. In asynchronous distributed optimization, the gradient delay problem arises because optimization parameters may be updated using stale gradients. We consider a hub-and-spoke distributed system and derive the expected gradient staleness in terms of system parameters such as the number of nodes, the communication delay, and the expected compute time. Our derivations provide a means to compare different algorithms based on the expected gradient staleness they suffer from.
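The staleness notion the abstract describes can be made concrete with a small event-driven simulation. The sketch below is illustrative only (the names, timing model, and parameters are assumptions, not the paper's derivation): each worker reads the current model version, spends an exponentially distributed compute time plus a fixed communication delay, and the hub applies its gradient on arrival. The staleness of an update is the number of model updates applied between the version the gradient was computed against and the version at which it is applied.

```python
import heapq
import random

def simulate_staleness(num_workers=8, num_updates=20000,
                       mean_compute=1.0, delay=0.1, seed=0):
    """Illustrative sketch of async SGD staleness on a hub-and-spoke system.

    Assumed model (not the paper's exact setup): each worker reads the
    current model version, computes for an Exp(1/mean_compute) time, and
    sends its gradient over a fixed-delay link; the hub applies gradients
    in arrival order. Staleness of an update is
    (hub version at apply time) - (version the gradient was read at).
    Returns the empirical mean staleness over num_updates updates.
    """
    rng = random.Random(seed)
    version = 0
    events = []  # min-heap of (arrival_time, version_read_by_worker)
    for _ in range(num_workers):
        t = rng.expovariate(1.0 / mean_compute) + delay
        heapq.heappush(events, (t, 0))  # all workers start on version 0
    total_staleness = 0
    for _ in range(num_updates):
        t, read_v = heapq.heappop(events)
        total_staleness += version - read_v  # staleness of this gradient
        version += 1                         # hub applies the update
        # the same worker immediately begins a new round on the fresh model
        t_next = t + rng.expovariate(1.0 / mean_compute) + delay
        heapq.heappush(events, (t_next, version))
    return total_staleness / num_updates
```

With homogeneous workers and memoryless compute times, the mean staleness comes out close to `num_workers - 1`, since roughly every other worker completes one round during a given worker's round; a single worker is never stale. Varying `num_workers`, `delay`, and `mean_compute` shows how the system parameters named in the abstract drive the expected staleness.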
