An Asynchronous Updating Reinforcement Learning Framework for Task-oriented Dialog System
Sai Zhang (Beijing University of Posts and Telecommunications); Yuwei Hu (Beijing University of Posts and Telecommunications); Xiaojie Wang (Beijing University of Posts and Telecommunications); Caixia Yuan (Beijing University of Posts and Telecommunications)
Reinforcement learning has been applied to training dialog systems in many works. Previous approaches divide the dialog system into multiple modules, including dialog state tracking (DST) and dialog policy (DP), and train these modules simultaneously. However, the modules influence one another during training: errors from the DST module can misguide the dialog policy, while the system actions add extra difficulty to the DST task. To alleviate this problem, we propose an Asynchronous Updating Reinforcement Learning framework (AURL) that updates the DST module and the DP module asynchronously under a cooperative setting. Furthermore, curriculum learning is implemented to address the unbalanced data distribution encountered during reinforcement learning sampling, and multiple user models are introduced to increase dialog diversity. Results on the public SSD-PHONE dataset show that our method achieves a compelling 31.37% improvement in the dialog success rate. The code is publicly available at https://github.com/shunjiu/AURL.
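To make the asynchronous-updating idea concrete, below is a minimal Python sketch of an alternating training schedule in which one module is frozen while the other is updated, with dialogs sampled against a randomly chosen user model for diversity. All names here (`Module`, `sample_dialog`, `train_aurl`, the stage-based even/odd schedule) are hypothetical illustrations, not the paper's actual implementation; the real update schedule, cooperative reward, and curriculum ordering may differ.

```python
import random

class Module:
    """Stand-in for a trainable component (DST or DP); hypothetical."""
    def __init__(self, name):
        self.name = name
        self.frozen = False

    def update(self, batch):
        # A real implementation would apply a gradient step here;
        # frozen modules skip their update entirely.
        if not self.frozen:
            pass

def sample_dialog(user_models):
    """Roll out one dialog against a randomly chosen user simulator."""
    user = random.choice(user_models)  # multiple user models -> more diverse dialogs
    return {"user": user, "turns": []}  # placeholder trajectory

def train_aurl(dst, dp, user_models, stages=4, episodes_per_stage=100):
    """Alternate which module learns: even stages update DST, odd stages DP."""
    for stage in range(stages):
        active, frozen = (dst, dp) if stage % 2 == 0 else (dp, dst)
        frozen.frozen, active.frozen = True, False
        for _ in range(episodes_per_stage):
            batch = sample_dialog(user_models)
            active.update(batch)

if __name__ == "__main__":
    train_aurl(Module("DST"), Module("DP"), user_models=["patient", "impatient"])
```

The point of the alternation is that each module trains against a temporarily stationary partner, so DST errors do not destabilize the policy update (and vice versa) within a stage.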