
Modeling Global Latent Semantic in Multi-Turn Conversations with Random Context Reconstruction

Chengwen Zhang (Beijing University of Posts & Telecommunications); Danqin Wu (Beijing University of Posts & Telecommunications)

07 Jun 2023

In multi-turn dialogue generation, producing coherent responses given the dialogue history remains a challenge, because neural models must learn the complex semantic structure relating responses to their contexts. In practice, a multi-turn conversation unfolds against a constant background, such as the dialogue scene, style, and topic, which strongly influences the distribution of words and phrases in the conversation. However, such global semantics have not been exploited in recent Transformer-based dialogue models. In this paper, we propose a Global semantic-guided Variational Dialog (GVDialog) model, which introduces a Variational Autoencoder (VAE) into a basic Transformer-based hierarchical dialogue model and uses a Random Context Reconstruction (RCR) task to compress global semantics into the latent space without any time-consuming human annotation. The latent variables, interpreted as holistic attributes of the dialogue history, guide the response decoder to generate coherent utterances at a global level. Experiments are conducted on the Chinese Douban dataset and the English Cornell Movie dataset. Evaluation results show the effectiveness and superiority of GVDialog compared with other hierarchical dialogue models.
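
As a rough illustration of the idea described in the abstract, the sketch below combines a Transformer context encoder, a VAE-style global latent over the dialogue history, and a response decoder conditioned on that latent. All class names, dimensions, pooling, and the way the latent is injected into the decoder are assumptions made for illustration; the abstract does not specify the authors' implementation, and the RCR objective is only indicated in a comment.

```python
import torch
import torch.nn as nn


class GVDialogSketch(nn.Module):
    """Illustrative sketch (not the authors' code): a Transformer context
    encoder, a VAE-style global latent z over the dialogue history, and a
    response decoder guided by z."""

    def __init__(self, vocab_size=32000, d_model=512, latent_dim=64,
                 nhead=8, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.context_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # VAE posterior parameters for the global latent.
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.latent_to_memory = nn.Linear(latent_dim, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.response_decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def encode_context(self, context_ids):
        h = self.context_encoder(self.embed(context_ids))        # (B, T, d_model)
        pooled = h.mean(dim=1)                                    # crude pooling over the history
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return h, z, mu, logvar

    def forward(self, context_ids, response_ids):
        h, z, mu, logvar = self.encode_context(context_ids)
        # Prepend the global latent to the decoder memory so it can guide
        # every generation step.
        memory = torch.cat([self.latent_to_memory(z).unsqueeze(1), h], dim=1)
        tgt_len = response_ids.size(1)
        causal = torch.triu(torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
        dec = self.response_decoder(self.embed(response_ids), memory, tgt_mask=causal)
        logits = self.lm_head(dec)
        # Standard KL term of the VAE objective.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # A Random Context Reconstruction (RCR) loss would additionally decode
        # randomly chosen context utterances from z, pushing global semantics
        # (scene, style, topic) into the latent; it is omitted in this sketch.
        return logits, kl


# Minimal usage with toy tensors.
model = GVDialogSketch()
context = torch.randint(0, 32000, (2, 30))    # batch of tokenized dialogue histories
response = torch.randint(0, 32000, (2, 12))   # shifted target responses
logits, kl = model(context, response)
print(logits.shape, kl.item())                # torch.Size([2, 12, 32000])
```

In this sketch the latent acts as an extra memory token for cross-attention, which is one simple way to let a single global vector influence the whole response; the paper's actual conditioning mechanism may differ.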
