    Length: 00:09:39
11 Jun 2021

Recent attempts show that factorizing 3D convolutional filters into separate spatial and temporal components brings impressive improvements in action recognition. However, traditional temporal convolution operating along the temporal dimension aggregates unrelated features, since the feature maps of fast-moving objects have shifted spatial positions. In this paper, we propose a novel and effective Multi-Directional Convolution (MDConv), which extracts features along different spatial-temporal orientations. Notably, MDConv has the same FLOPs and parameters as the traditional 1D temporal convolution. We also propose the Spatial-Temporal Features Pyramid Module (STFPM) to fuse spatial semantics at different scales in a light-weight way. Our extensive experiments show that models integrating MDConv achieve better accuracy on several large-scale action recognition benchmarks such as the Kinetics, AVA, and Something-Something V1 & V2 datasets.
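The abstract does not spell out how MDConv is implemented; the sketch below is only one plausible reading of "extracting features along different spatial-temporal orientations while keeping the FLOPs and parameters of a 1D temporal convolution." It applies a depthwise 3-tap temporal convolution whose neighbouring-frame taps read from spatially shifted positions, with different channel groups following different (assumed) directions. The class name, the direction set, and the group split are illustrative assumptions, not the authors' code.

```python
# A minimal sketch (not the authors' implementation) of a multi-directional
# temporal convolution: a 3-tap depthwise temporal kernel whose t-1 / t+1 taps
# sample spatially shifted positions, so each channel group aggregates features
# along a different spatial-temporal orientation. Shifts add no parameters, so
# the parameter/FLOP count matches a plain depthwise 1D temporal convolution.
import torch
import torch.nn as nn


class MultiDirectionalTemporalConv(nn.Module):
    # (dy, dx) shift applied to the t-1 tap of each channel group; the t+1 tap
    # uses the opposite shift, tracing a line through space-time. The first
    # group (0, 0) degenerates to an ordinary temporal convolution.
    DIRECTIONS = [(0, 0), (1, 0), (0, 1), (1, 1)]  # assumed orientations

    def __init__(self, channels: int):
        super().__init__()
        assert channels % len(self.DIRECTIONS) == 0
        # One 3-tap kernel per channel, as in a depthwise 1D temporal conv.
        self.weight = nn.Parameter(torch.randn(channels, 3) * 0.1)

    @staticmethod
    def _shift(x, dy, dx):
        # Circular spatial shift for brevity; a real operator would zero-pad.
        return torch.roll(x, shifts=(dy, dx), dims=(3, 4))

    def forward(self, x):
        # x: (N, C, T, H, W)
        groups = torch.chunk(x, len(self.DIRECTIONS), dim=1)
        kernels = torch.chunk(self.weight, len(self.DIRECTIONS), dim=0)
        outs = []
        for feat, ker, (dy, dx) in zip(groups, kernels, self.DIRECTIONS):
            # Temporal neighbours sampled at spatially shifted positions
            # (circular temporal shift here, again only for brevity).
            prev = self._shift(feat, dy, dx).roll(1, dims=2)     # frame t-1, shifted +d
            nxt = self._shift(feat, -dy, -dx).roll(-1, dims=2)   # frame t+1, shifted -d
            k = ker.view(1, -1, 1, 1, 1, 3)                       # broadcast per channel
            outs.append(prev * k[..., 0] + feat * k[..., 1] + nxt * k[..., 2])
        return torch.cat(outs, dim=1)


if __name__ == "__main__":
    conv = MultiDirectionalTemporalConv(channels=64)
    clip = torch.randn(2, 64, 8, 14, 14)  # (batch, channels, frames, H, W)
    print(conv(clip).shape)               # torch.Size([2, 64, 8, 14, 14])
```

Because the spatial shifts are parameter-free, this layer can drop into a factorized (2+1)D block in place of the temporal convolution without changing the model's size, which is consistent with the FLOPs/parameter claim in the abstract.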

Chairs:
Désiré Sidibé
