11 Jun 2021

Contextual information has been widely utilized in visual recognition tasks. This is especially true for action recognition, because contextual cues such as the objects a person interacts with and the scene where the action is performed are inseparable from the action category. To this end, we propose an efficient relation module that combines Human-Object and Scene-Object relations for action recognition. Specifically, the Human-Object interaction submodule captures more accurate appearance and spatial relations to build human-object interaction pairs, and the Scene-Object interaction submodule learns the probability that each object is involved in the scene, helping to discover the key interaction pair. We conduct extensive experiments on the Stanford 40 and PASCAL VOC 2012 Action datasets to verify our model, and the results show that our method achieves superior performance on both datasets. In particular, we obtain the best results on the Stanford 40 dataset compared with state-of-the-art methods.
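
The sketch below is a minimal PyTorch illustration of the two-submodule idea described in the abstract: one branch relates a human to candidate objects via appearance and box-geometry features, and the other weights objects by how likely they are involved in the scene. All class names, layer sizes, and feature dimensions (e.g. `HumanObjectRelation`, `SceneObjectGate`, `feat_dim=1024`) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of Human-Object and Scene-Object relation submodules.
# Dimensions and architecture details are illustrative assumptions.
import torch
import torch.nn as nn


class HumanObjectRelation(nn.Module):
    """Builds human-object pair features from appearance and box-geometry features."""

    def __init__(self, feat_dim=1024, spatial_dim=8, hidden=512):
        super().__init__()
        self.pair_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim + spatial_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
        )

    def forward(self, human_feat, object_feats, spatial_feats):
        # human_feat: (D,), object_feats: (N, D), spatial_feats: (N, S)
        n = object_feats.size(0)
        human_rep = human_feat.unsqueeze(0).expand(n, -1)
        pair_in = torch.cat([human_rep, object_feats, spatial_feats], dim=-1)
        return self.pair_mlp(pair_in)  # (N, hidden) pair relation features


class SceneObjectGate(nn.Module):
    """Estimates how likely each object is involved, given the scene feature."""

    def __init__(self, feat_dim=1024, hidden=512):
        super().__init__()
        self.gate_mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, scene_feat, object_feats):
        n = object_feats.size(0)
        scene_rep = scene_feat.unsqueeze(0).expand(n, -1)
        logits = self.gate_mlp(torch.cat([scene_rep, object_feats], dim=-1))
        return torch.sigmoid(logits)  # (N, 1) involvement probabilities


class RelationActionHead(nn.Module):
    """Weights human-object pairs by scene-object probabilities and classifies the action."""

    def __init__(self, feat_dim=1024, spatial_dim=8, hidden=512, num_actions=40):
        super().__init__()
        self.ho = HumanObjectRelation(feat_dim, spatial_dim, hidden)
        self.so = SceneObjectGate(feat_dim, hidden)
        self.classifier = nn.Linear(hidden, num_actions)

    def forward(self, human_feat, object_feats, spatial_feats, scene_feat):
        pair_feats = self.ho(human_feat, object_feats, spatial_feats)  # (N, hidden)
        weights = self.so(scene_feat, object_feats)                    # (N, 1)
        pooled = (weights * pair_feats).sum(dim=0) / weights.sum().clamp(min=1e-6)
        return self.classifier(pooled)  # (num_actions,) action logits
```

The key design point conveyed by the abstract is the weighting step: scene-conditioned object probabilities emphasize the interaction pair most relevant to the action before classification.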

Chairs:
Désiré Sidibé