Improving Reverberant Speech Training Using Diffuse Acoustic Simulation
Zhenyu Tang, Lianwu Chen, Bo Wu, Dong Yu, Dinesh Manocha
SPS
We present an efficient and realistic geometric acoustic simulation approach for generating and augmenting training data in speech-related machine learning tasks. Our physically based acoustic simulation method can model occlusion as well as specular and diffuse reflections of sound in complex acoustic environments, whereas the classical image method models only specular reflections in simple room settings. We show that, trained on our synthetic data and without fine-tuning on real impulse responses, the same neural networks achieve significant improvements on real test sets: 1.58% in far-field speech recognition and 21% in keyword spotting.
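The augmentation pipeline the abstract describes boils down to convolving dry (anechoic) speech with simulated room impulse responses (RIRs) before training. The sketch below illustrates that step; the exponentially decaying noise RIR is a crude stand-in assumption for the paper's geometric acoustic simulator, not the authors' method.

```python
import numpy as np

def synthetic_rir(rt60=0.4, fs=16000, length=0.5, seed=0):
    """Toy RIR: white noise under an exponential decay set by RT60.
    A placeholder for a real geometric-acoustics simulator (assumption)."""
    rng = np.random.default_rng(seed)
    n = int(length * fs)
    t = np.arange(n) / fs
    # 60 dB of decay over rt60 seconds; ln(10^3) ~= 6.9078
    envelope = np.exp(-6.9078 * t / rt60)
    rir = rng.standard_normal(n) * envelope
    return rir / np.max(np.abs(rir))

def reverberate(clean, rir):
    """Convolve dry speech with an RIR to synthesize reverberant audio."""
    wet = np.convolve(clean, rir)[: len(clean)]
    # Rescale so the wet signal has the same RMS energy as the dry input.
    wet *= np.sqrt(np.mean(clean**2) / (np.mean(wet**2) + 1e-12))
    return wet

fs = 16000
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s dry test tone
wet = reverberate(clean, synthetic_rir(fs=fs))
print(wet.shape)  # -> (16000,)
```

In a real pipeline, the `reverberate` output would replace or augment the clean utterances fed to the recognizer, with RIRs drawn from many simulated rooms.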