Secure Federated Averaging Algorithm with Differential Privacy
Yiwei Li, Tsung-Hui Chang, Chong-Yung Chi
Federated learning (FL), as a recent advance in distributed machine learning, is capable of learning a model over the network without directly accessing the clients' raw data. Nevertheless, the clients' sensitive information can still be exposed to adversaries via differential attacks on the messages exchanged between the parameter server and the clients. In this paper, we consider the widely used federated averaging (FedAvg) algorithm and propose to enhance data privacy with the differential privacy (DP) technique, which obfuscates the exchanged messages by adding properly calibrated Gaussian noise. We analytically show that the proposed secure FedAvg algorithm maintains an O(1/T) convergence rate, where T is the total number of stochastic gradient descent (SGD) steps per client. Moreover, we demonstrate how various algorithm parameters impact the communication efficiency of the algorithm. Experimental results are presented to examine the practical performance of the proposed algorithm.
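The paper's algorithm details are not reproduced on this page, but the recipe the abstract describes (obfuscating each client's exchanged message with Gaussian noise before the server averages them) matches the standard clip-and-noise pattern for DP in federated learning. The following is a minimal sketch of one such communication round, assuming per-client L2 clipping of the model update followed by Gaussian perturbation; the function names, the clip norm, the noise multiplier, the toy least-squares task, and the omission of (epsilon, delta) accounting across rounds are all illustrative assumptions, not the authors' exact settings.

```python
# Sketch of DP-enhanced FedAvg: clip each client's update, add Gaussian
# noise calibrated to the clipped sensitivity, then average at the server.
# Hyperparameters (clip_norm, noise_mult, lr, steps) are illustrative only.
import numpy as np

def local_sgd(model, data, lr=0.1, steps=10):
    """Run a few SGD steps on one client's data (toy least-squares loss)."""
    w = model.copy()
    X, y = data
    for _ in range(steps):
        i = np.random.randint(len(y))           # one-sample stochastic gradient
        grad = (X[i] @ w - y[i]) * X[i]
        w -= lr * grad
    return w

def dp_fedavg_round(global_w, clients, clip_norm=1.0, noise_mult=1.1):
    """One round: clip each client's update to bound its L2 sensitivity,
    perturb it with Gaussian noise, then average the noisy updates."""
    noisy_updates = []
    for data in clients:
        update = local_sgd(global_w, data) - global_w
        norm = np.linalg.norm(update)
        update *= min(1.0, clip_norm / max(norm, 1e-12))  # L2 clipping
        sigma = noise_mult * clip_norm                    # Gaussian mechanism scale
        update += np.random.normal(0.0, sigma, size=update.shape)
        noisy_updates.append(update)
    return global_w + np.mean(noisy_updates, axis=0)

# Toy usage: 5 clients with synthetic linear data, 20 communication rounds.
rng = np.random.default_rng(0)
true_w = rng.normal(size=4)
clients = []
for _ in range(5):
    X = rng.normal(size=(50, 4))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))
w = np.zeros(4)
for _ in range(20):
    w = dp_fedavg_round(w, clients)
```

The clipping step is what makes the Gaussian noise meaningful: it caps the L2 sensitivity of any single client's contribution at clip_norm, so noise with standard deviation proportional to clip_norm masks each client's message. Tracking the cumulative (epsilon, delta) budget over T local SGD steps and all rounds, which the paper's analysis covers, is deliberately left out of this sketch.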