Learning Diverse Sub-Policies Via A Task-Agnostic Regularization On Action Distributions.
Liangyu Huo, Mai Xu, Zulin Wang, Yuhang Song
Automatic sub-policy discovery has recently received much attention in hierarchical reinforcement learning (HRL). Conventional approaches to learning sub-policies tend to collapse into a single sub-policy that dominates the whole task, as they lack mechanisms to ensure diversity among sub-policies. In this paper, we formulate the discovery of diverse sub-policies as a trajectory inference problem. We then propose an information-theoretic objective defined on action distributions to encourage diversity. Moreover, we derive two simplifications for discrete and continuous action spaces to reduce computation. Finally, experimental results on two HRL domains show that the proposed approach further improves state-of-the-art approaches without modifying their existing hyperparameters, suggesting the wide applicability and robustness of our approach.
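To make the idea of an action-distribution diversity regularizer concrete, here is a minimal sketch in Python. It is not the paper's exact objective; it only illustrates one common information-theoretic choice, an average pairwise KL divergence between the sub-policies' action distributions at a given state on a discrete action space. The function names (`kl_divergence`, `diversity_bonus`) and the weighting of the bonus are hypothetical.

```python
# Minimal sketch (assumed, not the authors' exact objective): a pairwise
# KL-divergence bonus over sub-policy action distributions on a discrete
# action space. A larger bonus indicates more diverse sub-policies.
import numpy as np


def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete action distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))


def diversity_bonus(action_dists):
    """Average pairwise KL divergence among sub-policy action distributions
    at a given state; how it is weighted against the task reward is left
    to the caller (hypothetical)."""
    n = len(action_dists)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += kl_divergence(action_dists[i], action_dists[j])
                pairs += 1
    return total / max(pairs, 1)


# Example: three sub-policies over four discrete actions.
dists = [
    np.array([0.7, 0.1, 0.1, 0.1]),
    np.array([0.1, 0.7, 0.1, 0.1]),
    np.array([0.25, 0.25, 0.25, 0.25]),
]
print(diversity_bonus(dists))
```

In this sketch, the first two sub-policies concentrate on different actions and so receive a large mutual bonus, while a near-uniform sub-policy contributes less; a collapsed set of identical sub-policies would receive a bonus near zero, which is the failure mode the abstract's regularization is meant to prevent.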