History, Present and Future: Enhancing Dialogue Generation with Few-shot History-Future Prompt
Yihe Wang (Wuhan University); Yitong Li (Huawei Technologies Co., Ltd.); Yasheng Wang (Noah's Ark Lab, Huawei); Fei Mi (Huawei); Pingyi Zhou (Noah's Ark Lab, Huawei); Jin Liu (School of Computer Science, Wuhan University); Xin Jiang (Huawei Noah's Ark Lab); Qun Liu (Huawei Noah's Ark Lab)
SPS
Dialogue history and response in open-domain dialogue are loosely coupled. Generating informative responses solely from the original dialogue history is difficult, as the history may not contain enough information or may contain irrelevant noise.
Intuitively, if a generation model can foresee the possible dialogue future, or retrieve truly useful histories, it can generate more informative responses. In this paper, we propose a novel lightweight dialogue generation framework, named few-shot history-future prompt, that utilizes useful histories and simulated futures to help generate informative responses, without fine-tuning or adding extra parameters. To obtain useful histories, we retrieve and combine relevant utterances from noisy multi-turn histories. We then adopt a retrieval-generation hybrid approach to obtain diversified simulated futures. In this way, our model learns to condition on history combinations and simulated futures via few-shot learning. Experiments on publicly available datasets demonstrate that our method helps models generate better responses.
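The pipeline described above can be sketched in code. The following is an illustrative toy example, not the authors' implementation: the word-overlap retriever, the `(possible future)` prompt marker, and all function names are assumptions standing in for the paper's actual retrieval, future-simulation, and prompting components.

```python
def retrieve_relevant_history(history, query, top_k=2):
    """Toy relevance scoring: keep the top_k past utterances that share
    the most words with the latest query (a stand-in for a real
    retriever over noisy multi-turn histories)."""
    def overlap(utt):
        return len(set(utt.lower().split()) & set(query.lower().split()))
    ranked = sorted(history[:-1], key=overlap, reverse=True)
    kept = [u for u in ranked[:top_k] if overlap(u) > 0]
    # Preserve the original dialogue order of the kept utterances.
    return [u for u in history[:-1] if u in kept]

def build_prompt(history, simulated_future, few_shot_examples):
    """Concatenate few-shot examples, the filtered history combination,
    and a simulated future utterance into one generation prompt."""
    query = history[-1]
    useful = retrieve_relevant_history(history, query)
    lines = few_shot_examples + useful + [query]
    # A simulated future (here hard-coded; the paper obtains it via a
    # retrieval-generation hybrid) is appended before the response slot.
    lines.append(f"(possible future) {simulated_future}")
    lines.append("Response:")
    return "\n".join(lines)

prompt = build_prompt(
    history=[
        "I adopted a puppy last week.",
        "The weather was terrible yesterday.",
        "My puppy keeps chewing my shoes.",
    ],
    simulated_future="Maybe I should buy some chew toys.",
    few_shot_examples=["Example dialogue -> example response"],
)
print(prompt)
```

In this sketch, the irrelevant weather utterance is filtered out of the prompt, while the relevant puppy utterance and the simulated future are kept, so a frozen language model conditioned on this prompt sees only useful context.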