Styx: Adaptive Poisoning Attacks against Byzantine-Robust Defenses in Federated Learning

Yuxin Wen (University of Maryland); Jonas A. Geiping (University of Maryland, College Park); Micah Goldblum (University of Maryland); Tom Goldstein (University of Maryland, College Park)

07 Jun 2023

Decentralized training of machine learning models, for instance with federated learning protocols, continues to diffuse from theory toward practical applications. In federated learning (FL), a central server trains a model collaboratively with a group of users by communicating model updates, without exchanging private user data. However, these systems can be influenced during training by malicious users who send poisoned updates. Because training is decentralized and each user controls their own device, such users are free to poison the training protocol. In turn, this has led to a number of proposals to incorporate aggregation strategies from Byzantine-robust learning into the FL paradigm. Byzantine-robust strategies are provably secure for simple model classes, and these robustness properties are often assumed to extend to neural networks as well. In this work, we argue that a range of popular robust aggregation strategies, when applied to neural networks, can be trivially circumvented through simple adaptive attacks. We discuss the intuitions behind these adaptive attacks and show that, despite their simplicity, they provide strong baselines that lead to significant drops in model performance in FL systems.
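The abstract does not spell out the attacks themselves, but the underlying intuition translates into a short sketch. The following is a minimal, hypothetical illustration (the helpers `median_aggregate` and `adaptive_poison` are our own names, not from the paper) of how an attacker who knows the server uses a coordinate-wise median, a common Byzantine-robust aggregator, can bias the aggregate while keeping every malicious update inside the benign range, so that no single update looks like an outlier.

```python
import numpy as np

def median_aggregate(updates):
    # Coordinate-wise median: a standard Byzantine-robust aggregation rule.
    return np.median(np.stack(updates), axis=0)

def adaptive_poison(benign_updates, target_sign, n_malicious):
    # Hypothetical adaptive attack: for each coordinate, submit the benign
    # extreme that matches the attacker's desired direction. Every malicious
    # value lies within the benign range, so it evades per-coordinate outlier
    # filtering, yet the median of the pooled updates shifts toward the target.
    stacked = np.stack(benign_updates)
    lo, hi = stacked.min(axis=0), stacked.max(axis=0)
    poisoned = np.where(target_sign > 0, hi, lo)
    return [poisoned.copy() for _ in range(n_malicious)]

# Toy round: 8 honest users, 4 colluding attackers pushing weights upward.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=5) for _ in range(8)]
malicious = adaptive_poison(benign, target_sign=np.ones(5), n_malicious=4)

print("honest-only median: ", median_aggregate(benign))
print("median under attack:", median_aggregate(benign + malicious))
# The attacked median moves toward the benign maximum in every coordinate.
```

In this sketch the per-round shift is small, but repeated over many training rounds such shifts can compound into the significant performance drops the abstract describes; the specific attacks and the defenses evaluated are detailed in the paper itself.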
