Graphcomm: A Graph Neural Network Based Method For Multi-Agent Reinforcement Learning
Siqi Shen, Yongquan Fu, Huayou Su, Hengyue Pan, Qiao Peng, Yong Dou, Cheng Wang
SPS
Length: 00:10:02
Communication among agents is important for Multi-Agent Reinforcement Learning (MARL). In this work, we propose GraphComm, a method that makes use of the relationships among agents for MARL communication. GraphComm takes explicit relations (e.g., agent types), which can be provided through background knowledge, into account to better model the relationships among agents. Besides explicit relations, GraphComm considers implicit relations, which are formed by agent interactions. GraphComm uses Graph Neural Networks (GNNs) to model this relational information and to assist the learning of agent communication. Through extensive experimental evaluation, we show that GraphComm obtains better results than state-of-the-art methods on challenging StarCraft II unit micromanagement tasks.
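The abstract describes communication as GNN message passing over a relation graph among agents. As a rough illustration only, not the authors' implementation, the sketch below shows one round of graph-based message passing: each agent's hidden state is updated from its own state plus messages aggregated from neighbors along an adjacency matrix (which could encode either explicit relations such as agent types or learned implicit relations). All names (`gnn_comm_round`, `W_self`, `W_msg`) and the fully-connected graph are assumptions for illustration.

```python
import numpy as np

def gnn_comm_round(h, adj, W_self, W_msg):
    """One round of GNN-style communication among agents.

    h:      (n_agents, d) hidden states, one row per agent
    adj:    (n_agents, n_agents) relation graph (explicit or implicit)
    W_self: (d, d) transform applied to an agent's own state
    W_msg:  (d, d) transform applied to incoming neighbor messages
    """
    msgs = adj @ (h @ W_msg)          # aggregate transformed neighbor states
    return np.tanh(h @ W_self + msgs)  # combine with own state, nonlinearity

rng = np.random.default_rng(0)
n_agents, d = 4, 8
h = rng.standard_normal((n_agents, d))
adj = np.ones((n_agents, n_agents)) - np.eye(n_agents)  # fully connected, no self-loop
W_self = 0.1 * rng.standard_normal((d, d))
W_msg = 0.1 * rng.standard_normal((d, d))

out = gnn_comm_round(h, adj, W_self, W_msg)
print(out.shape)  # one updated hidden state per agent
```

In an actual MARL pipeline, each agent's policy network would consume its updated hidden state after one or more such communication rounds; the paper's contribution lies in how the relation graphs themselves are constructed and learned.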
Chairs:
Seung-Jun Kim