Integrating multiple ASR systems into NLP backend with attention fusion
Takatomo Kano, Atsunori Ogawa, Marc Delcroix, Shinji Watanabe
Spoken language processing (SLP) tasks such as speech summarization and translation can be addressed with cascade models, which combine an automatic speech recognition (ASR) frontend with a natural language processing (NLP) backend such as machine translation (MT) or text summarization (TS). With this cascade approach, we can exploit large non-paired datasets to independently train state-of-the-art models for each module. However, ASR errors directly degrade the performance of the NLP backend. In this paper, we reduce the impact of ASR errors on the NLP backend by combining transcriptions from multiple ASR systems. Recognizer output voting error reduction (ROVER) is a widely used technique for system combination; although ROVER improves ASR performance, its combination process is not optimized for the backend task. We propose a ROVER-like system combination scheme that uses attention-based fusion to align and combine multiple ASR hypotheses. This allows the combination process to be optimized for the backend NLP task without changing the ASR frontend. Our proposed technique is general and can be applied to various SLP tasks. We confirm its effectiveness on both speech summarization and speech translation experiments.
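To illustrate the idea of attention-based fusion of multiple ASR hypotheses, the following is a minimal PyTorch sketch. The hierarchical word-level/hypothesis-level attention layout, module names, and shapes below are illustrative assumptions, not the exact architecture from the paper.

```python
# Sketch only: the two-level attention design here is an assumption for
# illustration and is not claimed to match the authors' exact model.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuse encoder outputs of multiple ASR hypotheses with attention.

    The decoder query attends to each hypothesis separately (word-level
    attention), then a second attention weights the per-hypothesis context
    vectors (hypothesis-level attention), so the fusion can be trained
    end-to-end with the backend NLP task while the ASR frontends stay fixed.
    """

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        # Word-level attention applied within each hypothesis.
        self.word_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Hypothesis-level attention over the per-system context vectors.
        self.hyp_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, query, hyp_encodings):
        # query:         (batch, tgt_len, d_model) decoder states
        # hyp_encodings: list of (batch, src_len_i, d_model), one per ASR system
        contexts = []
        for enc in hyp_encodings:
            ctx, _ = self.word_attn(query, enc, enc)   # (batch, tgt_len, d_model)
            contexts.append(ctx)
        # Stack per-hypothesis contexts along a "system" axis and attend over it.
        stacked = torch.stack(contexts, dim=2)          # (batch, tgt_len, n_sys, d_model)
        b, t, n, d = stacked.shape
        stacked = stacked.reshape(b * t, n, d)
        q = query.reshape(b * t, 1, d)
        fused, _ = self.hyp_attn(q, stacked, stacked)   # (batch*tgt_len, 1, d_model)
        return fused.reshape(b, t, d)


# Toy usage: fuse hypotheses from three ASR systems of different lengths.
if __name__ == "__main__":
    fusion = AttentionFusion(d_model=256)
    dec_states = torch.randn(2, 10, 256)
    hyps = [torch.randn(2, length, 256) for length in (18, 20, 17)]
    print(fusion(dec_states, hyps).shape)  # torch.Size([2, 10, 256])
```

Because the fusion weights are produced by attention rather than by ROVER-style voting, they receive gradients from the backend loss (e.g., summarization or translation), which is what allows the combination to be optimized for the downstream task.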