Mat-Net: Representing Appearance-Irrelevant Warp Field By Multiple Affine Transformations
Jingwei Liu, Longquan Dai
Warp-based methods for image animation estimate a warp field that rearranges the pixels of the input image to roughly align it with the target image. Current methods predict accurate warp fields by relying on manually annotated data. In this paper, we propose a simple method (MAT-Net) that predicts a more precise warp field in a self-supervised way. MAT-Net decomposes the complex spatial movement of an object between two images into multiple simple local motions (i.e., affine transformations) occurring in different regions of the images. Our model then computes a warp field depicting the complex object movement by combining all of the local motions. MAT-Net encodes appearance-irrelevant object movement accurately. Compared to the state-of-the-art method, MAT-Net generates more realistic images with faster inference speed. We have published the source code of our project online.
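
To make the decomposition concrete, the following is a minimal, illustrative sketch (not the authors' released code) of how K local affine transformations, each weighted by a soft spatial mask, could be combined into a single dense warp field; the function name, tensor shapes, and softmax normalization are assumptions made here for illustration.

# Illustrative sketch only: combining K local affine transformations,
# each weighted by a soft spatial mask, into one dense warp field.
# Tensor names, shapes, and normalization are assumptions, not MAT-Net's implementation.
import torch
import torch.nn.functional as F

def combine_affine_warps(affines, masks):
    """
    affines: (N, K, 2, 3) -- K predicted local affine matrices per image
    masks:   (N, K, H, W) -- soft assignment of each pixel to a local motion
    returns: (N, H, W, 2) -- dense warp field in normalized [-1, 1] coordinates,
             ready to be consumed by torch.nn.functional.grid_sample
    """
    N, K, H, W = masks.shape
    # Identity sampling grid in normalized coordinates, shape (N*K, H, W, 2)
    eye = torch.eye(2, 3, device=masks.device, dtype=masks.dtype)
    eye = eye.view(1, 2, 3).repeat(N * K, 1, 1)
    grid = F.affine_grid(eye, (N * K, 1, H, W), align_corners=True)
    # Homogeneous pixel coordinates (x, y, 1), shape (N, K, H, W, 3)
    ones = torch.ones(N * K, H, W, 1, device=masks.device, dtype=masks.dtype)
    coords = torch.cat([grid, ones], dim=-1).view(N, K, H, W, 3)
    # Apply each local affine transformation to every pixel coordinate
    warped = torch.einsum('nkhwc,nkdc->nkhwd', coords, affines)   # (N, K, H, W, 2)
    # Soft-combine the K local warps with the predicted masks
    weights = masks.softmax(dim=1).unsqueeze(-1)                  # (N, K, H, W, 1)
    return (weights * warped).sum(dim=1)                          # (N, H, W, 2)

# Usage sketch: warp the source image toward the target
# warp = combine_affine_warps(affines, masks)                     # (N, H, W, 2)
# aligned = F.grid_sample(source, warp, align_corners=True)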