DEEP VIDEO INPAINTING LOCALIZATION USING SPATIAL AND TEMPORAL TRACES
Shujin Wei, Haodong Li, Jiwu Huang
Advanced deep-learning-based video inpainting can fill a specified video region with visually plausible content, usually leaving imperceptible traces. Since inpainting can be used for malicious video manipulation, it raises potential privacy and security issues. It is therefore necessary to detect and localize the video regions subjected to deep inpainting. This paper addresses this problem by exploiting the spatial and temporal traces left by inpainting. Firstly, the inpainting traces are enhanced by intra-frame and inter-frame residuals. In particular, we guide the extraction of inter-frame residuals with optical-flow-based frame alignment, which better reveals the inpainting traces. Then, a dual-stream network, acting as the encoder, is designed to learn discriminative features from the frame residuals. Finally, bidirectional convolutional LSTMs are embedded in the decoder network to produce pixel-wise predictions of the inpainted regions in each frame. The proposed method is evaluated on tampered videos created by two state-of-the-art deep video inpainting algorithms. Extensive experimental results show that the proposed method can effectively localize the inpainted regions, outperforming existing methods.
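
To make the residual-enhancement step more concrete, the following is a minimal PyTorch sketch of the general idea, not the authors' implementation: the Laplacian-style high-pass filter, the function names, and the assumption that a dense optical-flow field is already available are illustrative choices. It computes an intra-frame (spatial) residual by high-pass filtering each frame, and an inter-frame (temporal) residual by backward-warping a neighboring frame onto the reference frame with optical flow before differencing.

# Illustrative sketch only; filters and function names are assumptions.
import torch
import torch.nn.functional as F

def intra_frame_residual(frame):
    # High-pass residual of one frame (B, C, H, W); a generic Laplacian
    # kernel stands in for whatever spatial filter is actually used.
    kernel = torch.tensor([[ 0., -1.,  0.],
                           [-1.,  4., -1.],
                           [ 0., -1.,  0.]], device=frame.device)
    kernel = kernel.view(1, 1, 3, 3).repeat(frame.size(1), 1, 1, 1)
    return F.conv2d(frame, kernel, padding=1, groups=frame.size(1))

def warp_to_reference(neighbor, flow_ref_to_neighbor):
    # Backward-warp a neighboring frame into the reference frame's
    # coordinates using a dense optical-flow field (B, 2, H, W).
    b, _, h, w = neighbor.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=neighbor.device),
                            torch.arange(w, device=neighbor.device),
                            indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # (1, 2, H, W)
    coords = base + flow_ref_to_neighbor
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                        # (B, H, W, 2)
    return F.grid_sample(neighbor, grid, mode="bilinear", align_corners=True)

def inter_frame_residual(reference, neighbor, flow_ref_to_neighbor):
    # Flow-aligned temporal residual: difference between the reference
    # frame and its neighbor after alignment.
    return reference - warp_to_reference(neighbor, flow_ref_to_neighbor)

Aligning the neighboring frame before differencing keeps object and camera motion from dominating the temporal residual, so the remaining differences are more likely to reflect inpainting artifacts; in the described pipeline, the two residual types would then feed the two streams of the encoder.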