Hierarchical Sequence Representation With Graph Network
Xiang Wu, Da Chen, Yuan He, Hui Xue, Jianfeng Dong, Feng Mao
Video classification is a challenging task in computer vision. Its performance relies heavily on the scale of the training data and on the effectiveness of the video embedding produced by a robust embedding network. Unsupervised solutions such as average pooling of frame features are simple, label-independent, and parameter-free, but they cannot represent video sequences effectively. Supervised methods, such as RNNs, can improve recognition accuracy; however, their performance degrades as videos grow longer and as hierarchical relationships emerge between frames across events in the video. In this paper, we propose a novel video classification method based on a deep convolutional graph neural network (DCGN). The proposed method exploits the hierarchical structure of video: it performs multi-level embedding feature extraction on the frame sequence through the graph network and obtains a video representation that reflects event semantics hierarchically. Experiments on the YouTube-8M Large-Scale Video Understanding dataset show that our proposed model outperforms commonly used RNN-based models, verifying its effectiveness for video classification.
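To make the idea of multi-level graph embedding over a frame sequence concrete, the following is a minimal sketch, not the authors' implementation: it assumes pre-extracted per-frame features, builds an adjacency matrix from pairwise cosine similarity, and coarsens the graph level by level with top-k node pooling before classification. The layer names, pooling ratio, similarity-based adjacency, and class count are illustrative assumptions, not details taken from the paper.

```python
# Hierarchical graph embedding over video frames: a hedged, illustrative sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvPoolLayer(nn.Module):
    """One level: graph convolution over frame nodes, then top-k node pooling."""

    def __init__(self, in_dim, out_dim, pool_ratio=0.5):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.score = nn.Linear(out_dim, 1)  # node scores used for top-k pooling
        self.pool_ratio = pool_ratio

    def forward(self, x):
        # x: (num_nodes, in_dim) for one video; nodes are frames or segments.
        # Adjacency from pairwise cosine similarity (an assumption of this sketch).
        x_norm = F.normalize(x, dim=-1)
        adj = torch.relu(x_norm @ x_norm.t())                 # (N, N)
        adj = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        h = torch.relu(self.proj(adj @ x))                    # neighborhood aggregation
        # Keep the highest-scoring nodes to form the next, coarser level.
        k = max(1, int(h.size(0) * self.pool_ratio))
        scores = self.score(h).squeeze(-1)
        idx = scores.topk(k).indices
        return h[idx] * torch.sigmoid(scores[idx]).unsqueeze(-1)


class HierarchicalGraphClassifier(nn.Module):
    def __init__(self, feat_dim=1024, hidden=512, levels=3, num_classes=3862):
        super().__init__()
        dims = [feat_dim] + [hidden] * levels
        self.levels = nn.ModuleList(
            GraphConvPoolLayer(dims[i], dims[i + 1]) for i in range(levels)
        )
        self.classifier = nn.Linear(hidden * levels, num_classes)

    def forward(self, frames):
        # frames: (num_frames, feat_dim); collect one pooled summary per level
        # so the final embedding reflects the hierarchy of event semantics.
        summaries = []
        h = frames
        for layer in self.levels:
            h = layer(h)
            summaries.append(h.mean(dim=0))                   # level-wise video embedding
        video_emb = torch.cat(summaries, dim=-1)
        return self.classifier(video_emb)                     # multi-label logits


if __name__ == "__main__":
    model = HierarchicalGraphClassifier()
    logits = model(torch.randn(300, 1024))                    # 300 frames, 1024-d features
    print(logits.shape)                                       # torch.Size([3862])
```

In this sketch, concatenating a pooled summary from each level is one plausible way to keep both fine-grained (frame-level) and coarse (event-level) information in the final video representation; the paper's exact aggregation scheme may differ.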