08 Jun 2021

In multi-turn dialogue systems, response generation depends not only on the sentences in the context but also on the words within each utterance. Although many methods model the relationship between words and utterances, problems remain, such as a tendency to generate trivial responses. In this paper, we propose a hierarchical self-attention network, named HSAN, which attends to the important words and utterances in the context simultaneously. First, a hierarchical encoder updates the word and utterance representations with their corresponding position information. Second, the response representations are updated by a masked self-attention module in the decoder. Finally, the relevance between the utterances and the response is computed by another self-attention module and used in the next response decoding step. In terms of both automatic metrics and human judgments, experimental results show that HSAN significantly outperforms all baselines on two common public datasets.
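To make the hierarchical encoding step concrete, below is a minimal PyTorch sketch of word-level self-attention within each utterance followed by utterance-level self-attention over pooled utterance vectors, with learned positional embeddings at both levels. The class name, dimensions, mean pooling, and single-layer encoders are illustrative assumptions, not the authors' implementation; the decoder-side masked self-attention and utterance-response relevance modules are omitted.

```python
import torch
import torch.nn as nn

class HierarchicalSelfAttentionSketch(nn.Module):
    """Sketch of a two-level (word -> utterance) self-attention encoder.
    Hypothetical names and sizes; not the HSAN reference code."""

    def __init__(self, vocab_size, d_model=256, n_heads=4, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)  # learned position information
        word_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        utt_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.word_encoder = nn.TransformerEncoder(word_layer, num_layers=1)
        self.utt_encoder = nn.TransformerEncoder(utt_layer, num_layers=1)

    def forward(self, context):
        # context: (batch, n_utterances, n_words) token ids
        b, u, w = context.shape
        tokens = context.view(b * u, w)
        word_pos = torch.arange(w, device=context.device)
        x = self.embed(tokens) + self.pos(word_pos)   # word + position embeddings
        x = self.word_encoder(x)                      # word-level self-attention
        utt_vecs = x.mean(dim=1).view(b, u, -1)       # pool words into utterance vectors
        utt_pos = torch.arange(u, device=context.device)
        utt_vecs = utt_vecs + self.pos(utt_pos)       # utterance position information
        return self.utt_encoder(utt_vecs)             # utterance-level self-attention

# Example: 2 dialogues, each with 3 utterances of 10 tokens
enc = HierarchicalSelfAttentionSketch(vocab_size=1000)
ctx = torch.randint(0, 1000, (2, 3, 10))
out = enc(ctx)  # (2, 3, 256): one contextualized vector per utterance
```

The two-level design mirrors the idea in the abstract: attention weights at the word level pick out important words inside each utterance, while a second attention pass over utterance vectors picks out important utterances in the context.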

Chairs:
Yang Liu

