Large-Context Pointer-Generator Networks For Spoken-To-Written Style Conversion
Mana Ihori, Akihiko Takashima, Ryo Masumura
This paper introduces a spoken-to-written style conversion method suitable for handling a series of texts, such as discourses and conversations. Spoken-to-written style conversion can improve the readability of automatic speech recognition (ASR) outputs because ASR systems transcribe input speech literally, so their transcriptions contain disfluencies and redundant expressions. The most successful method of text style conversion is sequence-to-sequence mapping with pointer-generator networks, which possess a copy mechanism over the input sequence. However, pointer-generator networks cannot process a series of texts serially because they were developed to handle isolated texts; in particular, they cannot consider the relationships between the text currently being processed and all preceding texts. Therefore, this paper proposes large-context pointer-generator networks, which combine pointer-generator networks with large-context encoder-decoder networks. In the proposed networks, all preceding written-style text can be considered when converting the current spoken-style text into written-style text. In addition, the proposed networks introduce a large-context copy mechanism that can copy tokens from both the current spoken-style text and the preceding written-style text. Our experiments demonstrate that the proposed networks yield better performance than conventional pointer-generator networks and large-context encoder-decoder networks.
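To make the large-context copy mechanism concrete, the sketch below shows one plausible way a single decoding step could mix a vocabulary generation distribution with copy distributions over both the current spoken-style input and the preceding written-style context. This is a minimal illustration, not the authors' implementation: the function name, tensor shapes, and the three-way softmax gate over generate / copy-source / copy-context are assumptions for exposition.

```python
# Minimal sketch (assumed, not the paper's code) of one decoding step with a
# large-context copy mechanism: the output distribution is a mixture of
# (i) vocabulary generation, (ii) copying from the current spoken-style input,
# and (iii) copying from the preceding written-style context.

import torch
import torch.nn.functional as F


def large_context_copy_step(
    vocab_logits,   # (batch, vocab_size) decoder generation logits
    src_attn,       # (batch, src_len) attention over current spoken-style tokens
    ctx_attn,       # (batch, ctx_len) attention over preceding written-style tokens
    src_ids,        # (batch, src_len) vocabulary ids of the source tokens
    ctx_ids,        # (batch, ctx_len) vocabulary ids of the context tokens
    mode_logits,    # (batch, 3) gate logits: [generate, copy-source, copy-context]
):
    """Return a (batch, vocab_size) distribution for one output token."""
    p_vocab = F.softmax(vocab_logits, dim=-1)
    p_mode = F.softmax(mode_logits, dim=-1)  # mixing weights, sum to 1

    # Scatter attention mass onto the vocabulary positions of the attended tokens,
    # turning the attention weights into copy distributions over the vocabulary.
    p_copy_src = torch.zeros_like(p_vocab).scatter_add_(1, src_ids, src_attn)
    p_copy_ctx = torch.zeros_like(p_vocab).scatter_add_(1, ctx_ids, ctx_attn)

    return (p_mode[:, 0:1] * p_vocab
            + p_mode[:, 1:2] * p_copy_src
            + p_mode[:, 2:3] * p_copy_ctx)


if __name__ == "__main__":
    batch, vocab, src_len, ctx_len = 2, 50, 7, 20
    dist = large_context_copy_step(
        torch.randn(batch, vocab),
        F.softmax(torch.randn(batch, src_len), dim=-1),
        F.softmax(torch.randn(batch, ctx_len), dim=-1),
        torch.randint(0, vocab, (batch, src_len)),
        torch.randint(0, vocab, (batch, ctx_len)),
        torch.randn(batch, 3),
    )
    print(dist.sum(dim=-1))  # each row sums to ~1.0
```

The key difference from a conventional pointer-generator step is the third mixture component: attention over the preceding written-style context lets the model reuse tokens it has already produced for earlier utterances, which is what allows a series of texts to be handled serially.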