LiFi: Towards Linguistically Informed Frame Interpolation
Aradhya Mathur, Devansh Batra, Yaman Kumar Singla, Rajiv Ratn Shah, Changyou Chen, Roger Zimmermann
SPS
Length: 00:12:44
Here we explore the problem of speech video interpolation. With video accounting for close to 70% of web traffic, such content today forms the primary medium of online communication and entertainment. Despite high scores on conventional metrics like MSE, PSNR, and SSIM, we find that state-of-the-art frame interpolation models fail to produce faithful speech interpolation. For instance, we observe that the lips stay static across most interpolated frames even while the person is still speaking. With this motivation, using information from words, sub-words, and visemes, we provide a new set of linguistically informed metrics targeted explicitly at the problem of speech video interpolation. We release several datasets to test video interpolation models for their speech understanding. We also design linguistically informed deep learning video interpolation algorithms to generate the missing frames.
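To make the abstract's claim concrete, here is a minimal sketch of the conventional pixel-level metrics it mentions (MSE and PSNR) and of why they can miss the static-lips failure mode: a frame that simply repeats the previous one still scores a fairly high PSNR when the unchanged background dominates the image. The frame sizes and the "lip region" below are hypothetical illustrations, not the paper's actual data or metric code.

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two frames (uint8 arrays)."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the frames are closer."""
    err = mse(a, b)
    if err == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / err)

# Hypothetical static-lips case: "interpolate" by repeating the last frame.
prev_frame = np.full((64, 64), 128, dtype=np.uint8)   # flat background
true_frame = prev_frame.copy()
true_frame[40:50, 20:44] = 200                        # small lip-region change

# The repeated frame is wrong exactly where speech happens, yet the
# whole-frame PSNR remains moderately high because most pixels match.
print(f"PSNR of repeated frame: {psnr(prev_frame, true_frame):.2f} dB")
```

A linguistically informed metric, by contrast, would score the frame against the viseme expected from the spoken word at that instant, so a static mouth during speech is penalized regardless of how few pixels it affects.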
Chairs:
Mahnoosh Mehrabani