Cross Domain Low-Dose CT Image Denoising With Semantic Information Alignment
Jiaxin Huang, Kecheng Chen, Jiayu Sun, Xiaorong Pu, Yazhou Ren
In this paper, we propose Ventriloquist-Net, a talking head generation model that uses only a speech segment and a single source face image. It places emphasis on emotive expressions, whose cues are implicitly inferred from the speech clip alone. We formulate our framework as a set of independently trained modules to expedite convergence. This not only allows extension to datasets in a semi-supervised manner but also facilitates handling in-the-wild source images. Quantitative and qualitative evaluations of the generated videos demonstrate state-of-the-art performance even on unseen input data. The implementation and supplementary videos are available at https://github.com/dipnds/VentriloquistNet.
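To make the input/output structure described above concrete, the following is a minimal sketch of the kind of inference interface such a model exposes: one source face image plus one speech clip in, a sequence of video frames out. All class names, shapes, and the placeholder forward pass are illustrative assumptions, not the API of the linked repository.

```python
# Hypothetical sketch of a speech-plus-single-image driven talking head generator.
# Names and shapes are assumptions for illustration only.
import torch

class TalkingHeadGenerator(torch.nn.Module):
    """Placeholder generator mapping (source image, speech) to video frames."""
    def __init__(self, num_frames: int = 180):
        super().__init__()
        self.num_frames = num_frames

    def forward(self, source_image: torch.Tensor, speech: torch.Tensor) -> torch.Tensor:
        # source_image: (B, 3, H, W); speech: (B, T) raw waveform or audio features.
        # A real model would infer per-frame motion and expression cues from the
        # speech and re-render the source face; here we simply tile the source
        # image to illustrate the expected output shape (B, num_frames, 3, H, W).
        b = source_image.shape[0]
        return source_image.unsqueeze(1).expand(b, self.num_frames, -1, -1, -1)

if __name__ == "__main__":
    model = TalkingHeadGenerator()
    face = torch.randn(1, 3, 256, 256)   # single source face image
    audio = torch.randn(1, 16000 * 5)    # ~5 s of 16 kHz speech
    video = model(face, audio)
    print(video.shape)  # torch.Size([1, 180, 3, 256, 256])
```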