Multimodal Video Saliency Analysis with User-biased Information
Jiangyue Xia, Jingqi Tian, Hui Qiao, Yichen Li, Jiangtao Wen, Yuxing Han
Video saliency is widely used in a variety of video understanding and processing applications. Although studies have indicated that user preferences influence visual attention when watching videos, current research on saliency relies on visual content alone and does not take viewer-related information into account. In this paper, we propose a learning-based multimodal framework that predicts video saliency with the aid of social data analysis. We introduce a popularity-assisted attention mechanism into a content-specific neural network to extract spatio-motion features, and employ a convolutional long short-term memory (ConvLSTM) network to capture temporal characteristics. Experiments demonstrate that our approach outperforms state-of-the-art video saliency analysis methods, validating the effectiveness of incorporating external user-biased information into saliency prediction.
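The abstract does not give implementation details, so the following is only a minimal sketch of how a popularity-assisted spatial attention map and a ConvLSTM layer could be combined for per-frame saliency prediction. The class names (`ConvLSTMCell`, `PopularityAttentionSaliency`), layer sizes, and the choice to inject popularity as a scalar score per frame are illustrative assumptions, not the authors' architecture.

```python
# Sketch (not the authors' code): CNN frame features are reweighted by an
# attention map conditioned on a popularity score, then passed through a
# ConvLSTM cell to model temporal dynamics, and read out as a saliency map.
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell operating on feature maps of shape (B, C, H, W)."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces the input, forget, output, and cell gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


class PopularityAttentionSaliency(nn.Module):
    """Hypothetical model: spatial features modulated by a popularity-assisted
    attention map, followed by a ConvLSTM and a 1x1 saliency readout."""

    def __init__(self, feat_ch=64, hid_ch=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # A scalar popularity score per frame is broadcast spatially and fused
        # with the content features to produce a single-channel attention map.
        self.attn = nn.Conv2d(feat_ch + 1, 1, 1)
        self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, 1, 1)

    def forward(self, frames, popularity):
        # frames: (B, T, 3, H, W); popularity: (B, T) user-biased scores in [0, 1].
        B, T, _, H, W = frames.shape
        h = frames.new_zeros(B, self.convlstm.hid_ch, H, W)
        c = torch.zeros_like(h)
        maps = []
        for t in range(T):
            feat = self.backbone(frames[:, t])
            pop = popularity[:, t].view(B, 1, 1, 1).expand(B, 1, H, W)
            attn = torch.sigmoid(self.attn(torch.cat([feat, pop], dim=1)))
            h, c = self.convlstm(feat * attn, (h, c))
            maps.append(torch.sigmoid(self.readout(h)))
        return torch.stack(maps, dim=1)  # (B, T, 1, H, W) saliency maps


if __name__ == "__main__":
    model = PopularityAttentionSaliency()
    frames = torch.rand(2, 4, 3, 64, 64)    # two clips, four frames each
    popularity = torch.rand(2, 4)           # per-frame popularity scores
    print(model(frames, popularity).shape)  # torch.Size([2, 4, 1, 64, 64])
```

The sketch keeps the two ingredients named in the abstract separable: the attention branch carries the external user-biased signal, while the ConvLSTM handles temporal characteristics, so either component could be swapped for the paper's actual design.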