04 May 2020

In this paper, we aim to solve a distributed machine learning problem under Byzantine attacks. In the distributed system, a number of workers (termed Byzantine workers) could send arbitrary messages to the master and bias the learning process, due to data corruption, computation errors, or malicious attacks. Prior work has considered a total variation (TV) norm-penalized approximation formulation to handle Byzantine attacks, where the TV norm penalty forces the regular workers’ local variables to be close and, meanwhile, tolerates the outliers sent by the Byzantine workers. The stochastic subgradient method, which does not consider the problem structure, is shown to be able to solve the TV norm-penalized approximation formulation. In this paper, we propose a stochastic alternating direction method of multipliers (ADMM) that utilizes the special structure of the TV norm penalty. The stochastic ADMM iterates are further simplified, such that the iteration-wise communication and computation costs are the same as those of the stochastic subgradient method. Numerical experiments on the COVERTYPE and MNIST datasets demonstrate the resilience of the proposed stochastic ADMM to various Byzantine attacks.
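To make the TV norm-penalized formulation concrete, the following is a minimal toy sketch of the stochastic subgradient baseline described in the abstract as prior work, not the proposed stochastic ADMM. The least-squares local losses, the Gaussian Byzantine messages, and all parameter values are illustrative assumptions rather than the paper's actual setup: each regular worker i keeps a local variable x_i and minimizes f_i(x_i) + λ‖x_i − x_0‖₁, while the master's variable x_0 is pulled toward the received worker messages through the same ℓ1 (TV-type) penalty.

```python
import numpy as np

# Toy problem (assumed, not from the paper): each worker holds a least-squares loss.
rng = np.random.default_rng(0)
d, n_workers, n_byz = 5, 10, 3          # dimension, total workers, Byzantine workers
x_true = rng.normal(size=d)
A = [rng.normal(size=(50, d)) for _ in range(n_workers)]
b = [Ai @ x_true + 0.1 * rng.normal(size=50) for Ai in A]

lam, lr, T = 0.5, 0.01, 2000            # TV penalty weight, step size, iterations (assumed)
x0 = np.zeros(d)                        # master variable
x = [np.zeros(d) for _ in range(n_workers)]  # workers' local variables

for t in range(T):
    msgs = []
    for i in range(n_workers):
        if i < n_byz:
            # Byzantine worker: sends an arbitrary message (here, large Gaussian noise).
            msgs.append(10.0 * rng.normal(size=d))
            continue
        # Regular worker: stochastic subgradient of f_i(x_i) + lam * ||x_i - x0||_1,
        # using one randomly sampled data point per iteration.
        j = rng.integers(len(b[i]))
        grad_f = A[i][j] * (A[i][j] @ x[i] - b[i][j])
        x[i] -= lr * (grad_f + lam * np.sign(x[i] - x0))
        msgs.append(x[i].copy())
    # Master: subgradient step on lam * sum_i ||x0 - msg_i||_1 over the received messages.
    x0 -= lr * lam * np.sum([np.sign(x0 - m) for m in msgs], axis=0)

print("distance to ground truth:", np.linalg.norm(x0 - x_true))
```

Because the ℓ1 penalty only propagates the sign of the disagreement between x_0 and each message, a Byzantine worker's influence on the master update is bounded per iteration; this is the tolerance-to-outliers property the abstract attributes to the TV norm penalty. The paper's contribution, the structure-exploiting stochastic ADMM with simplified iterates, is not reproduced here since its exact update rules are not given in the abstract.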
