04 May 2020

Goal-oriented dialogue systems are now widely adopted in industry, where the practical aspects of deploying them become of key importance. In particular, such systems are expected to fit into a rapid prototyping cycle for new products and domains. For data-driven dialogue systems, especially those based on deep learning, this amounts to maintaining production-level performance when given only a few 'seed' dialogue examples, a property normally referred to as data efficiency. With highly data-dependent deep learning methods, the most promising way to achieve practical data efficiency is transfer learning: training a base model on a larger, well-represented data source, then fine-tuning it on the available in-domain data. In this paper, we present a hybrid generative-retrieval model that can be trained via transfer learning. Using GPT-2 as the base model and fine-tuning it on the multi-domain MetaLWOz dataset, we obtain a robust dialogue model able to perform both response generation and response ranking. Combining the two, it outperforms several competitive generative-only and retrieval-only baselines, as measured by language-modeling quality on MetaLWOz and by goal-oriented metrics (intent and slot F1 scores) on the MultiWOZ corpus.
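To make the hybrid idea concrete, here is a minimal sketch, not the authors' released code, of how a single GPT-2 checkpoint can serve both roles the abstract describes: generating a candidate response and ranking retrieved candidates. It assumes a GPT-2 model fine-tuned on MetaLWOz-style dialogues and uses the Hugging Face `transformers` library; the ranking criterion (average per-token log-likelihood of a response given the context) and the helper names `lm_score`, `generate`, and `hybrid_response` are illustrative assumptions, not the paper's exact method.

```python
# Sketch of hybrid generative-retrieval response selection with GPT-2.
# Assumptions: a fine-tuned checkpoint (here the stock "gpt2" stands in),
# and LM likelihood as the shared ranking score for all candidates.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # stand-in for the fine-tuned model
model.eval()

def lm_score(context: str, response: str) -> float:
    """Average per-token log-likelihood of `response` given `context`."""
    ctx_ids = tokenizer.encode(context, return_tensors="pt")
    full_ids = tokenizer.encode(context + response, return_tensors="pt")
    labels = full_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # mask context: score response tokens only
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss  # mean NLL over unmasked tokens
    return -loss.item()

def generate(context: str, max_new_tokens: int = 40) -> str:
    """Sample one generated candidate response for the dialogue context."""
    ids = tokenizer.encode(context, return_tensors="pt")
    out = model.generate(
        ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

def hybrid_response(context: str, retrieved: list[str]) -> str:
    """Pool retrieved candidates with one generated candidate; return the best-scored."""
    candidates = retrieved + [generate(context)]
    return max(candidates, key=lambda r: lm_score(context, r))
```

Under this sketch, retrieval supplies fluent in-domain responses when good matches exist, while the generator covers contexts the retrieval pool misses; scoring both candidate types with the same fine-tuned LM is what lets the single model arbitrate between them.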
