
QRelation: An Agent Relation-Based Approach for Multi-Agent Reinforcement Learning Value Function Factorization

Siqi Shen, Jun Liu, Mengwei Qiu, Weiquan Liu, Cheng Wang, Yongquan Fu, Qinglin Wang, Peng Qiao

    Length: 00:05:51
11 May 2022

The Centralized Training with Decentralized Execution (CTDE) paradigm, which trains policies centrally with access to additional information, is important for Multi-Agent Reinforcement Learning (MARL). Under CTDE, value function factorization methods use the global state during training and factorize the joint value function into multiple local value functions for decentralized execution. These approaches do not fully exploit the relational information among agents, resulting in sub-optimal models for complex tasks. To remedy this issue, we propose QRelation, a graph neural network approach for value function factorization. It considers both static relations (e.g., agent types) and dynamic relations (e.g., proximity). We show that QRelation obtains better results than state-of-the-art methods on challenging StarCraft II benchmarks.
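The abstract does not give the architecture, so the following is only a minimal sketch of the general idea it describes: per-agent local Q-values are aggregated over a relation graph whose edges combine a static relation (here, shared agent type) and a dynamic relation (here, spatial proximity), and then mixed with non-negative weights, QMIX-style, into a joint value. All function names, the adjacency construction, and the mixing rule are illustrative assumptions, not the authors' method.

```python
import numpy as np

def relation_adjacency(positions, types, radius=2.0):
    """Build a combined adjacency matrix: a static edge when two agents
    share a type, plus a dynamic edge when they are within `radius`.
    (Hypothetical construction for illustration only.)"""
    n = len(positions)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            static = types[i] == types[j]   # static relation, e.g. agent type
            dist = np.linalg.norm(np.asarray(positions[i]) -
                                  np.asarray(positions[j]))
            dynamic = dist <= radius        # dynamic relation, e.g. close-by
            if static or dynamic:
                adj[i, j] = 1.0
    return adj

def mix_values(local_qs, adj):
    """One relation-aware aggregation step (mean over self + neighbors),
    followed by a monotonic, non-negative weighted sum so the joint value
    still factorizes into per-agent contributions."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    agg = (local_qs + adj @ local_qs) / deg
    weights = np.ones_like(agg)             # non-negative mixing weights
    return float((weights * agg).sum())

# Example: three agents, two types; agents 0 and 1 are both same-type
# and close, agent 2 is isolated.
positions = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
types = [0, 0, 1]
local_qs = np.array([[1.0], [2.0], [3.0]])
adj = relation_adjacency(positions, types)
q_total = mix_values(local_qs, adj)
```

In this toy setup, agents 0 and 1 are linked by both relations, so their values are smoothed toward each other before mixing, while agent 2 contributes its value unchanged.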
