  • SPS Members: Free
  • IEEE Members: $11.00
  • Non-members: $15.00
  • Length: 00:11:55
10 Jun 2021

Multi-agent reinforcement learning (MARL) has been widely applied to cooperative tasks, where multiple agents are trained to collaboratively achieve global goals. During the training stage of MARL, inferring the policies of other agents can improve coordination efficiency. However, most existing policy inference methods require each agent to model every other agent separately, so resource consumption grows quadratically with the number of agents. In addition, inferring an agent's policy solely from its observations and actions may cause agent modeling to fail. To address these issues, we propose to let each agent infer the others' policies with its own model, given that the agents are homogeneous. This self-inference approach significantly reduces computation and storage consumption and guarantees the quality of agent modeling. Experimental results demonstrate the effectiveness of the proposed approach.
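The abstract describes the mechanism only in prose; the sketch below illustrates the core self-inference idea under assumptions not stated in the abstract (discrete actions, a small PyTorch policy network), and all names such as `SelfInferenceAgent` and `infer_teammate` are hypothetical, not taken from the paper. The point of contrast: a separate-model baseline would give each of the N agents N-1 opponent models (O(N^2) models in total), whereas here each agent keeps exactly one network and reuses it to predict teammates' actions, so storage grows linearly in N.

```python
# Minimal sketch of self-inference for homogeneous agents (illustrative only).
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Simple stochastic policy: observation -> categorical action distribution."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(obs))

class SelfInferenceAgent:
    """Holds a single policy and reuses it both to act and to model teammates.

    Because the agents are homogeneous, the agent's own policy doubles as a
    model of any teammate's policy, so no per-teammate opponent models are
    stored or trained.
    """
    def __init__(self, obs_dim: int, act_dim: int):
        self.policy = PolicyNet(obs_dim, act_dim)

    def act(self, own_obs: torch.Tensor) -> int:
        return self.policy(own_obs).sample().item()

    def infer_teammate(self, teammate_obs: torch.Tensor) -> torch.Tensor:
        # Self-inference: feed the teammate's observation through this
        # agent's OWN policy to predict the teammate's action distribution.
        with torch.no_grad():
            return self.policy(teammate_obs).probs

# Usage: 4 homogeneous agents, each holding exactly one model.
obs_dim, act_dim, n_agents = 8, 4, 4
agents = [SelfInferenceAgent(obs_dim, act_dim) for _ in range(n_agents)]
observations = torch.randn(n_agents, obs_dim)
# Agent 0 predicts agent 1's action distribution with its own policy:
predicted = agents[0].infer_teammate(observations[1])
print(predicted)  # tensor of 4 action probabilities
```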

Chairs:
Seung-Jun Kim
