
Attentional Fused Temporal Transformation Network For Video Action Recognition

Ke Yang, Jie Jiang, Peng Qiao, Xin Niu, Dongsheng Li, Yong Dou, Huadong Dai, Tianlong Shen, Zhiyuan Wang

  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 12:11
04 May 2020

Effective spatiotemporal feature representation is crucial to video-based action recognition. Focusing on discriminative spatiotemporal feature learning, we propose the Attentional Fused Temporal Transformation Network (AttnTTN) for action recognition, built on top of the popular Temporal Segment Network (TSN) framework. In the network, an Attentional Fusion Module (AttnFM) fuses the appearance and motion features at multiple ConvNet levels for each video snippet, forming a short-term video descriptor. With the fused features as inputs, Temporal Transformation Networks (TTNs) model the middle-term temporal transformations between neighboring snippets in sequential order. AttnTTN achieves state-of-the-art results on the two most popular action recognition datasets: UCF101 and HMDB51.
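As a rough illustration of the attentional fusion idea described above (not the paper's actual implementation), fusing appearance and motion features at one ConvNet level can be sketched as scoring each modality, normalizing the scores with a softmax, and taking the weighted combination as the short-term descriptor. The shapes, the linear scoring functions, and the function names here are assumptions for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentional_fusion(appearance, motion, w_a, w_m):
    """Fuse appearance (RGB) and motion (optical-flow) features for one
    snippet at one ConvNet level.

    Shapes (assumed for this sketch): `appearance` and `motion` are (C,)
    channel descriptors; `w_a` and `w_m` are (C,) projections producing
    one scalar score per modality. This is an illustrative sketch, not
    the paper's code.
    """
    # Score each modality, then normalize to attention weights in [0, 1].
    scores = np.array([appearance @ w_a, motion @ w_m])
    alpha = softmax(scores)
    # The fused short-term descriptor is the attention-weighted sum.
    return alpha[0] * appearance + alpha[1] * motion

# Toy usage with random features.
rng = np.random.default_rng(0)
C = 8
app = rng.standard_normal(C)
mot = rng.standard_normal(C)
w_a = rng.standard_normal(C)
w_m = rng.standard_normal(C)
fused = attentional_fusion(app, mot, w_a, w_m)
```

Because the softmax weights are non-negative and sum to one, the fused descriptor is a convex combination of the two modality features, letting the network emphasize appearance or motion per snippet.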
