Optimal Importance Sampling For Federated Learning
Elsa Rizk, Stefan Vlaski, Ali H. Sayed
SPS
Length: 00:06:44
Federated learning involves a mixture of centralized and decentralized processing tasks, where a server regularly selects a sample of the agents, and these in turn sample their local data to compute stochastic gradients for their learning updates. The sampling of both agents and data is generally uniform; in this work, however, we consider non-uniform sampling. We derive optimal importance sampling strategies for both agent and data selection and show that, under convexity and Lipschitz assumptions, non-uniform sampling without replacement improves the performance of the original FedAvg algorithm. We run experiments on regression and classification problems to illustrate the theoretical results.
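To make the idea concrete, here is a minimal sketch of non-uniform (importance) sampling for agent selection. The abstract does not specify the optimal sampling weights, so this sketch assumes a common choice from the importance-sampling literature, probabilities proportional to local gradient norms, and uses inverse-probability reweighting to keep the aggregated update unbiased. The variable names (`grads`, `probs`, `agg`) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K agents, each holding a local stochastic gradient.
K, d = 10, 5
grads = rng.normal(size=(K, d))

# Assumed importance-sampling weights: probability proportional to the
# gradient norm (the paper derives its own optimal weights under
# convexity and Lipschitz assumptions; this is only a stand-in).
norms = np.linalg.norm(grads, axis=1)
probs = norms / norms.sum()

# Server samples m agents with replacement according to probs.
# (The paper analyzes sampling without replacement; with-replacement
# sampling is used here because it makes the unbiasedness of the
# reweighted estimator below exact.)
m = 3
selected = rng.choice(K, size=m, replace=True, p=probs)

# Reweight each sampled gradient by 1 / (K * p_k) so the average over
# the sampled agents is an unbiased estimate of the uniform average
# gradient across all K agents.
agg = np.zeros(d)
for k in selected:
    agg += grads[k] / (K * probs[k])
agg /= m
```

The same reweighting pattern applies one level down, where each selected agent samples its local data non-uniformly to form its stochastic gradient.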
Chairs:
Rainer Martin