UCorrect: An Unsupervised Framework for Automatic Speech Recognition Error Correction
Jiaxin GUO (Huawei); Minghan Wang (Huawei); Xiaosong Qiao (Huawei); Daimeng Wei (Huawei); Hengchao Shang (HW-TSC); ZongYao LI (HW-TSC); Zhengzhe YU (HW-TSC); Yinglu Li (HUAWEI TECHNOLOGIES CO., LTD.); Chang Su (Huawei); Min Zhang (Huawei); Shimin Tao (Huawei); Hao Yang (Huawei)
Error correction techniques have been used to refine the output sentences of automatic speech recognition (ASR) models and achieve a lower word error rate (WER). Previous works usually adopt end-to-end models and depend heavily on Pseudo Paired Data and Original Paired Data. However, when pre-trained only on Pseudo Paired Data, such models actually harm correction performance, and fine-tuning on Original Paired Data requires the source-side data to be transcribed by a well-trained ASR model, which is time-consuming and not universal. In this paper, we propose UCorrect, an unsupervised Detector-Generator-Selector framework for ASR error correction that has no dependency on either kind of paired training data. UCorrect first detects whether each character is erroneous, then generates candidate characters for the erroneous positions, and finally selects the most confident candidate to replace the error character. Experiments on the public AISHELL-1 and WenetSpeech datasets show the effectiveness of UCorrect for ASR error correction: 1) it achieves significant WER reduction, 6.83% even without fine-tuning and 14.29% after fine-tuning; 2) it outperforms popular NAR correction models by a large margin with competitively low latency; and 3) it is a universal method, as it reduces the WER of the ASR model under different decoding strategies and of ASR models trained on datasets of different scales.
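As a rough illustration of the detect-generate-select procedure described above, the following is a minimal sketch that uses a masked language model (here BERT via the HuggingFace fill-mask pipeline) as the scorer; the model name, threshold, top-k value, and the toy input are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a character-level Detector-Generator-Selector loop for
# ASR error correction, assuming a masked language model as the scorer.
# The model name, detect_threshold, and top_k below are illustrative choices.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-chinese")

def detect_generate_select(hypothesis: str,
                           detect_threshold: float = 0.01,
                           top_k: int = 5) -> str:
    chars = list(hypothesis)
    for i, orig in enumerate(chars):
        # Mask the i-th character so the MLM can score it in context.
        masked = "".join(chars[:i]) + unmasker.tokenizer.mask_token + "".join(chars[i + 1:])
        # Detector: the original character is suspicious if the MLM
        # assigns it a low probability at the masked position.
        orig_score = unmasker(masked, targets=[orig])[0]["score"]
        if orig_score >= detect_threshold:
            continue
        # Generator: propose top-k candidate characters for the masked slot.
        candidates = unmasker(masked, top_k=top_k)
        # Selector: keep the most confident candidate if it beats the original.
        best = candidates[0]
        if best["score"] > orig_score:
            chars[i] = best["token_str"]
    return "".join(chars)

# Toy usage: one wrong character ("汽" instead of "气") in the ASR hypothesis.
print(detect_generate_select("今天天汽真好"))
```

This sketch replaces at most one character per position and leaves confident positions untouched, which mirrors the unsupervised setting in that it needs only a pre-trained language model and no paired correction data.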