Audio-driven facial landmark generation in violin performance using 3DCNN network with self attention model

Ting-Wei Lin (Academia Sinica); Chao-Lin Liu (National Chengchi University); Li Su (Academia Sinica)

07 Jun 2023

In a music scenario, both auditory and visual elements are essential to an outstanding performance. Recent research has focused on generating body movements or fingering from the audio of a music performance, but audio-driven face generation for music performance remains underexplored. In this paper, we compile a violin soundtrack and facial expression dataset (VSFE) for modeling facial expressions in violin performance. To our knowledge, this is the first dataset mapping the relationship between violin performance audio and musicians' facial expressions. We then propose a 3DCNN network with self-attention and residual blocks for audio-driven facial expression generation. In the experiments, we compare our method with three talking-face-generation baselines.
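The abstract describes the generator only at a high level. The following is a minimal sketch of how a 3D-CNN with residual blocks and self-attention might map audio features to per-frame facial landmarks; all names, shapes, and layer sizes (mel-spectrogram input, 68 two-dimensional landmarks, channel widths) are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: 3D-CNN encoder with residual blocks, followed by
# self-attention over time and a linear landmark head. Shapes and layer
# sizes are assumptions for illustration, not the paper's configuration.
import torch
import torch.nn as nn


class ResidualBlock3D(nn.Module):
    """Two 3D convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class AudioToLandmarks(nn.Module):
    """Maps an audio feature volume to a sequence of facial landmarks."""
    def __init__(self, n_landmarks=68, d_model=128, n_heads=4):
        super().__init__()
        # Treat the spectrogram as a (batch, 1, time, freq, 1) volume so
        # 3D convolutions can aggregate local time-frequency context.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, d_model, kernel_size=(3, 3, 1), padding=(1, 1, 0)),
            ResidualBlock3D(d_model),
            ResidualBlock3D(d_model),
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_landmarks * 2)  # (x, y) per landmark

    def forward(self, mel):                     # mel: (batch, time, freq)
        x = mel.unsqueeze(1).unsqueeze(-1)      # -> (batch, 1, time, freq, 1)
        x = self.encoder(x)                     # -> (batch, C, time, freq, 1)
        x = x.mean(dim=(3, 4)).transpose(1, 2)  # pool freq -> (batch, time, C)
        x, _ = self.attn(x, x, x)               # self-attention over time
        return self.head(x)                     # -> (batch, time, 2*n_landmarks)


if __name__ == "__main__":
    model = AudioToLandmarks()
    mel = torch.randn(2, 100, 80)   # 2 clips, 100 frames, 80 mel bins (assumed)
    print(model(mel).shape)         # torch.Size([2, 100, 136])
```

The self-attention stage lets each output frame attend to the whole audio context rather than only the local receptive field of the convolutions, which is the usual motivation for combining the two in sequence-generation models of this kind.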
