LEARNING TORSO PRIOR FOR CO-SPEECH GESTURE GENERATION WITH BETTER HAND SHAPE
Hexiang Wang, Fengqi Liu, Ran Yi, Lizhuang Ma
Co-speech gesture generation is the task of synthesizing gesture sequences synchronized with an input audio signal. Previous methods estimate the upper-body gesture as a whole, ignoring the different mapping relations between audio and different body parts, which leads to poor overall results and especially bad hand shapes. In this paper, we propose a novel three-branch co-speech gesture generation framework to obtain better results. In particular, we propose a Torso2Hand Prior Learning (T2HPL) module that leverages torso information as an extra prior to enhance hand pose prediction, and carefully design a hand shape discriminator to improve the authenticity of generated hand shapes. In addition, an arm orientation loss is designed to encourage the network to generate a torso part with better semantic expressiveness. Experiments on a dataset of four different speakers demonstrate the superiority of our method over state-of-the-art approaches.