DEEP VIDEO PREDICTION THROUGH SPARSE MOTION REGULARIZATION
Yung-Han Ho, Chih-Chun Chan, Wen-Hsiao Peng
This paper introduces a data-dependent sparse motion regularization for dense flow-based video prediction. To achieve video prediction (a form of extrapolation from past frames), a dense flow-based model estimates a motion vector for every pixel in the target frame for backward warping. Due to the sheer number of motion vectors to be estimated, the model tends to be complex, calling for proper regularization to avoid over-fitting. Most flow-based models adopt smoothness regularization. However, the smoothness requirement is detrimental to preserving discontinuities in the motion field, which often arise in videos with distinct object motion. To address this issue, our sparse motion regularization discovers distinct sparse motion via weighted K-means clustering and regularizes the model by minimizing the clustering error of the predicted motion field. When incorporated into an end-to-end trainable deep video prediction model, our scheme outperforms smoothness regularization and surpasses direct generation-based video prediction on the UCF-101 and Common Intermediate Format (CIF) datasets.
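The clustering-error idea can be pictured with a short PyTorch sketch (our own illustration, not the authors' released code): a few weighted K-means iterations group the predicted motion vectors into a small set of cluster centres, and the regularization term is the weighted squared distance of each vector to its assigned centre. The function name `sparse_motion_regularizer`, the optional per-pixel `weights` input, and the choices of K and iteration count are assumptions made for illustration.

```python
import torch

def sparse_motion_regularizer(flow, weights=None, num_clusters=4, iters=10):
    """Clustering-error regularizer on a dense flow field (illustrative sketch).

    flow:    (B, 2, H, W) predicted motion vectors for backward warping.
    weights: (B, H, W) optional per-pixel weights; uniform if None.
    Returns a scalar loss: weighted mean squared distance of each motion
    vector to its nearest cluster centre.
    """
    B, C, H, W = flow.shape
    vecs = flow.permute(0, 2, 3, 1).reshape(B, H * W, C)          # (B, N, 2)
    w = torch.ones(B, H * W, device=flow.device) if weights is None \
        else weights.reshape(B, H * W)

    # Initialise centres from randomly chosen motion vectors.
    idx = torch.randint(0, H * W, (B, num_clusters), device=flow.device)
    centres = torch.gather(vecs, 1, idx.unsqueeze(-1).expand(-1, -1, C))

    # Weighted K-means: assignment and centre updates need no gradient.
    with torch.no_grad():
        for _ in range(iters):
            d = torch.cdist(vecs, centres)                        # (B, N, K)
            assign = d.argmin(dim=-1)                             # (B, N)
            onehot = torch.nn.functional.one_hot(assign, num_clusters).float()
            wsum = (onehot * w.unsqueeze(-1)).sum(dim=1).clamp_min(1e-8)
            centres = torch.einsum('bnk,bn,bnc->bkc', onehot, w, vecs) \
                      / wsum.unsqueeze(-1)
        assign = torch.cdist(vecs, centres).argmin(dim=-1)        # final labels

    # Clustering error, differentiable w.r.t. the predicted flow: pulls each
    # motion vector toward its (detached) cluster centre.
    nearest = torch.gather(centres, 1, assign.unsqueeze(-1).expand(-1, -1, C))
    err = ((vecs - nearest) ** 2).sum(dim=-1)                     # (B, N)
    return (w * err).sum() / w.sum()
```

In training, this term would simply be added to the prediction loss with a small weighting factor, so that the network is encouraged to produce piecewise-constant motion fields while still preserving sharp motion boundaries between clusters.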