11 May 2022

Federated reinforcement learning (FRL) combines multi-agent reinforcement learning (MARL) and federated learning (FL) so that multiple agents can exchange messages with a central server to cooperatively learn their local policies. However, a number of malicious agents may deliberately modify the messages transmitted to the central server so as to hinder the learning process, a threat commonly modeled as Byzantine attacks. To address this issue, we propose to replace the simple average aggregation rule in FRL with robust aggregation, thereby enhancing Byzantine robustness. Specifically, we focus on the episodic task, where the environment and agents are reset at the beginning of each episode. First, we extend deep deterministic policy gradient (DDPG) to FRL (termed F-DDPG), which maintains a global critic and multiple local actors and is thus computation- and communication-efficient. Then, we introduce the geometric median and the median to aggregate the gradients received from the agents, and propose RF-DDPG, a class of Byzantine-robust FRL methods. Finally, we conduct numerical experiments to validate the robustness of RF-DDPG to Byzantine attacks.
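The abstract does not spell out the aggregation rules, but both are standard in the Byzantine-robustness literature. The sketch below, with illustrative function names, contrasts the non-robust mean with the two robust aggregators the abstract names: the coordinate-wise median and the geometric median, the latter approximated with Weiszfeld's iteration. It is a minimal illustration of the aggregation step only, not the authors' implementation.

```python
import numpy as np

def average(grads):
    # Simple mean aggregation: a single Byzantine agent can shift
    # the result arbitrarily far, so it is not robust.
    return np.mean(grads, axis=0)

def coordinate_median(grads):
    # Coordinate-wise median: robust as long as fewer than half
    # of the agents are Byzantine.
    return np.median(grads, axis=0)

def geometric_median(grads, iters=100, eps=1e-8):
    # Geometric median via Weiszfeld's iteration: the point that
    # minimizes the sum of Euclidean distances to all gradients.
    z = np.mean(grads, axis=0)
    for _ in range(iters):
        dists = np.linalg.norm(grads - z, axis=1)
        w = 1.0 / np.maximum(dists, eps)   # inverse-distance weights
        z_new = (w[:, None] * grads).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

# Toy example (hypothetical numbers): 8 honest agents send gradients
# near the true direction; 2 Byzantine agents send large bogus values.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
byzantine = np.full((2, 4), -100.0)
grads = np.vstack([honest, byzantine])

print("mean:            ", average(grads))             # pulled toward -100
print("coord. median:   ", coordinate_median(grads))   # stays near 1.0
print("geometric median:", geometric_median(grads))    # stays near 1.0
```

In an FRL server loop of the kind the abstract describes, the central server would apply one of these rules to the critic gradients uploaded by the agents each episode before updating the global critic, in place of the plain average.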
