  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 14:58
04 May 2020

In federated learning, a centralized model is built from information received from a group of agents, each of which collects its own data. This setting poses two major challenges: the agents observe data drawn from different distributions, and they have only limited capacity to send data over the network to the centralized unit. Sending all of the training data over the network is therefore impractical; each agent must train its own model and decide which relevant information to send to the centralized unit. In this work we propose a federated learning method in which each agent learns a low-complexity reproducing kernel Hilbert space (RKHS) representation. Leveraging the zero duality gap of the resulting problem and the fact that each dual variable is associated with a single sample, each agent discards the samples whose optimal dual variables are zero and sends only the fundamental samples to the centralized unit, which then computes the global model. We show that as the sample size grows, the solution obtained by the centralized unit converges to that of an omniscient classifier with access to all samples from all agents. We illustrate the performance of our federated learning algorithm and compare it to the omniscient classifier in simulation.
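
As a concrete illustration of the sample-selection step, the sketch below uses an RBF-kernel support vector machine from scikit-learn as a stand-in for the paper's RKHS formulation; the classifier, kernel, and hyperparameters here are assumptions for illustration, not the authors' implementation. Each agent fits a local classifier, discards samples whose optimal dual variables are zero, and sends only the remaining fundamental samples (the support vectors) to the centralized unit, which trains the global model on their union and is compared against an omniscient classifier trained on all samples.

    # Minimal sketch of dual-variable-based sample selection (assumed stand-in:
    # an RBF-kernel SVM, not the paper's exact RKHS formulation).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification

    # Simulated local datasets for three agents (stand-in for heterogeneous data).
    agents = []
    for seed in range(3):
        X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                                   n_redundant=0, random_state=seed)
        agents.append((X, y))

    # Each agent fits a local kernel classifier and keeps only the samples
    # whose optimal dual variables are nonzero (the support vectors).
    sent_X, sent_y = [], []
    for X, y in agents:
        local = SVC(kernel="rbf", C=1.0).fit(X, y)
        idx = local.support_          # indices of samples with nonzero duals
        sent_X.append(X[idx])
        sent_y.append(y[idx])

    # The centralized unit computes the global model from the union of the
    # fundamental samples received from all agents.
    X_sent = np.vstack(sent_X)
    y_sent = np.concatenate(sent_y)
    global_model = SVC(kernel="rbf", C=1.0).fit(X_sent, y_sent)

    # Omniscient baseline: a classifier with access to all samples of all agents.
    X_all = np.vstack([X for X, _ in agents])
    y_all = np.concatenate([y for _, y in agents])
    omniscient = SVC(kernel="rbf", C=1.0).fit(X_all, y_all)

    print("samples sent:", len(y_sent), "of", len(y_all))
    print("agreement with omniscient:",
          np.mean(global_model.predict(X_all) == omniscient.predict(X_all)))

Since the fundamental samples are typically a small fraction of each agent's data, this is where the communication savings over sending all training data would come from.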
