22 Sep 2021

Action recognition in top-view 360-degree videos is an emerging research topic in computer vision. Existing work uses a global projection method to transform 360-degree video frames into panorama frames for further processing. However, this unwrapping suffers from geometric distortion, i.e., people near the centre of the 360-degree frames appear highly stretched and distorted in the corresponding panorama frames (observed in 37.5% of the panorama frames in the 360Action dataset). Recognizing the actions of people near the centre therefore becomes difficult, which degrades overall action recognition performance. In this work, we overcome this challenge by using distortion-free person-centric images of the people near the centre, extracted directly from the input 360-degree video frames. We propose a simple yet effective hybrid two-stream architecture consisting of a panorama stream and a person-centric stream, whose predictions are combined to detect the actions in a video. Experiments on the recently introduced 360Action dataset validate the efficacy of the proposed method: we achieve an overall improvement of 2.3% mAP over the state-of-the-art method and a maximum improvement of 22.7% AP for the pickup action, which occurs mostly near the centre.
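
The sketch below illustrates one way the described two-stream design could combine clip-level action scores by late fusion; the fusion rule (a weighted average), the class count, and all function names are assumptions for illustration, not the authors' implementation.

    # Minimal late-fusion sketch for a hybrid two-stream model, assuming each
    # stream produces per-class action scores for a clip. All names and the
    # weighted-average fusion rule are hypothetical.
    import numpy as np

    NUM_CLASSES = 19  # hypothetical number of action classes

    def panorama_stream_scores(panorama_frames: np.ndarray) -> np.ndarray:
        """Placeholder for the panorama stream: a video model over unwrapped
        panorama frames would return per-class scores here."""
        return np.zeros(NUM_CLASSES)

    def person_centric_stream_scores(person_crops: list) -> np.ndarray:
        """Placeholder for the person-centric stream.

        Crops are taken directly from the 360-degree frames for people near the
        centre, so they avoid the stretching introduced by panorama unwrapping.
        """
        if not person_crops:
            return np.zeros(NUM_CLASSES)
        per_person = [np.zeros(NUM_CLASSES) for _ in person_crops]  # model outputs
        return np.max(per_person, axis=0)  # one possible person-to-clip pooling choice

    def fuse_predictions(panorama_frames, person_crops, alpha=0.5):
        """Combine both streams with a weighted average (one plausible fusion rule)."""
        p = panorama_stream_scores(panorama_frames)
        q = person_centric_stream_scores(person_crops)
        return alpha * p + (1.0 - alpha) * q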
