Byzantine-Robust Aggregation with Gradient Difference Compression and Stochastic Variance Reduction for Federated Learning
Heng Zhu, Qing Ling
We investigate the problem of Byzantine-robust compressed federated learning, where the transmissions from the workers to the master node are compressed and subject to malicious attacks from an unknown number of Byzantine workers. We show that the vanilla combination of distributed compressed stochastic gradient descent (SGD) with geometric median-based robust aggregation suffers from compression noise under Byzantine attacks. In light of this observation, we propose to reduce the compression noise with gradient difference compression to improve the Byzantine-robustness. We also observe the impact of the intrinsic stochastic noise caused by random sample selection, and adopt the stochastic average gradient algorithm (SAGA) to gradually eliminate the inner variations of the regular workers. We prove that the proposed algorithm reaches a neighborhood of the optimal solution at a linear convergence rate, and that the asymptotic learning error is of the same order as that of the state-of-the-art uncompressed method. Finally, numerical experiments demonstrate the effectiveness of the proposed method.
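As a rough illustration of the ideas summarized in the abstract, the sketch below combines (i) worker-side gradient difference compression against a locally maintained reference, (ii) a SAGA-style variance-reduced gradient estimate over each worker's samples, and (iii) geometric median aggregation at the master. This is a minimal sketch under assumed choices, not the paper's actual implementation: the compressor is a simple top-k operator, the geometric median is approximated with Weiszfeld iterations, and the least-squares loss, class names, and parameters (`Worker`, `top_k_compress`, `lr`, `rounds`, etc.) are all illustrative assumptions.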
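```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest (illustrative compressor)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def geometric_median(points, iters=50, eps=1e-8):
    """Approximate the geometric median with Weiszfeld iterations (robust aggregation)."""
    z = np.mean(points, axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        w = 1.0 / d
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

class Worker:
    """Regular worker: SAGA gradient estimate plus compressed gradient difference."""
    def __init__(self, data, labels, dim, k):
        self.data, self.labels = data, labels
        self.grad_table = np.zeros((len(data), dim))  # stored per-sample gradients (SAGA)
        self.h = np.zeros(dim)                        # reference point for gradient differences
        self.k = k

    def sample_grad(self, x, i):
        # least-squares sample gradient; stands in for any smooth local loss
        a, b = self.data[i], self.labels[i]
        return (a @ x - b) * a

    def step(self, x):
        i = np.random.randint(len(self.data))
        g_new = self.sample_grad(x, i)
        # SAGA estimator: new sample gradient minus the stored one, plus the table average
        g = g_new - self.grad_table[i] + self.grad_table.mean(axis=0)
        self.grad_table[i] = g_new
        # compress only the difference between the current estimate and the reference
        delta = top_k_compress(g - self.h, self.k)
        self.h = self.h + delta
        return delta  # only the compressed difference is transmitted to the master

def train(workers, dim, k, lr=0.1, rounds=100):
    """Master loop: accumulate compressed differences, aggregate with the geometric median."""
    x = np.zeros(dim)
    H = [np.zeros(dim) for _ in workers]  # master-side copies of the workers' references
    for _ in range(rounds):
        estimates = []
        for m, w in enumerate(workers):
            delta = w.step(x)
            H[m] = H[m] + delta
            estimates.append(H[m])
        x = x - lr * geometric_median(np.stack(estimates))
    return x
```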
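In this sketch, a Byzantine worker would be simulated by replacing its transmitted `delta` with an arbitrary vector; because the master aggregates the accumulated estimates with the geometric median rather than the mean, the update stays close to the regular workers' estimates as long as they form a majority, while the gradient difference compression and the SAGA estimator shrink the compression and stochastic noise that would otherwise inflate the aggregation error.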