  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:10:03
08 May 2022

Decentralized learning techniques have become increasingly popular for model training on distributed worker nodes. Unfortunately, such learning systems are vulnerable to failures and attacks. In this paper, we consider the decentralized learning problem over communication networks, in which worker nodes collaboratively train a machine learning model by exchanging model parameters with neighbors, but a fraction of the nodes are corrupted by a Byzantine attacker and could conduct malicious attacks. Our key idea for mitigating Byzantine attacks is to check the direction and magnitude of the "cross-update" vectors (the difference between each received model and the local model from the previous round) at each consensus round. We propose a similarity-based reweighting scheme to obtain a robust local model update for each worker. Our proposed method does not need to know the exact number of Byzantine nodes and can be employed in both static and time-varying networks. We evaluate our method on the Fashion-MNIST dataset under different Byzantine attacks and system sizes. Numerical results demonstrate the robustness of our proposed method against Byzantine attacks and its superior performance over existing methods.
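The abstract does not give the exact reweighting formulas, but the idea it describes — score each neighbor's cross-update vector by its directional agreement with the local update, clip oversized magnitudes, and form a weighted combination — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm; the function name `robust_aggregate`, the cosine-similarity weighting, and the norm-clipping rule are all assumptions chosen for clarity.

```python
import numpy as np

def robust_aggregate(local_prev, local_update, neighbor_models, eps=1e-12):
    """Hedged sketch of a similarity-based reweighting aggregation.

    local_prev:      this worker's model from the previous consensus round
    local_update:    this worker's own update (local model - local_prev)
    neighbor_models: model vectors received from neighbors this round
    """
    weights, updates = [], []
    ref_norm = np.linalg.norm(local_update)
    for m in neighbor_models:
        u = m - local_prev                       # cross-update vector
        # Direction check: cosine similarity with the local update.
        cos = float(np.dot(u, local_update) /
                    (np.linalg.norm(u) * ref_norm + eps))
        w = max(cos, 0.0)                        # zero weight for opposing directions
        # Magnitude check: clip updates larger than the local update's norm.
        u_norm = np.linalg.norm(u)
        if u_norm > ref_norm:
            u = u * (ref_norm / (u_norm + eps))
        weights.append(w)
        updates.append(u)
    # The worker always trusts its own update, so no head count of
    # Byzantine nodes is needed.
    weights.append(1.0)
    updates.append(local_update)
    w = np.asarray(weights)
    w = w / (w.sum() + eps)                      # normalize to a convex combination
    agg = sum(wi * ui for wi, ui in zip(w, updates))
    return local_prev + agg
```

Because an honest neighbor's cross-update roughly aligns with the local update while a poisoning update typically points elsewhere or is abnormally large, the combined direction-and-magnitude check suppresses malicious contributions without requiring the number of attackers in advance.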
