Fusion Target Attention Mask Generation Network For Video Segmentation

Yunyi Li, Fangping Chen, Fan Yang, Yuan Li, Huizhu Jia, Xiaodong Xie

Length: 05:06
27 Oct 2020

Video segmentation aims to segment target objects in a video sequence, which remains challenging due to the motion and deformation of objects. In this paper, we propose a novel attention-driven hybrid encoder-decoder network that generates object segmentation by fully leveraging spatial and temporal information. First, a multi-branch network is designed to learn feature representations from object appearance, location, and motion. Second, a target attention module is proposed to further exploit context information from the learned representations. In addition, a novel edge loss is designed that constrains the model to generate salient edge features and accurate segmentation. The proposed model has been evaluated on two widely used public benchmarks, and experiments demonstrate its superior robustness and effectiveness compared with state-of-the-art methods.
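
The abstract gives only a high-level description of the architecture. As a rough illustration of how such a pipeline might be wired together, the PyTorch sketch below combines three small encoder branches (appearance, location prior, and optical flow), a self-attention-style fusion gate standing in for the target attention module, and a Sobel-gradient boundary term standing in for the edge loss. All module names, channel sizes, and the exact attention and loss formulations are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a multi-branch, attention-fused segmentation network.
# Architecture details (branch depths, channel counts, attention form, edge
# loss definition) are assumptions; the paper's actual design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TargetAttention(nn.Module):
    """Self-attention gate over fused features (assumed formulation)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out  # residual connection keeps the original features


class MultiBranchSegNet(nn.Module):
    """Three encoder branches (appearance, location prior, motion) whose
    features are concatenated, attended, and decoded into a mask."""
    def __init__(self, feat=64):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
        self.appearance = branch(3)   # RGB frame
        self.location = branch(1)     # previous-frame mask as location prior
        self.motion = branch(2)       # two-channel optical flow
        self.attention = TargetAttention(3 * feat)
        self.decoder = nn.Sequential(
            nn.Conv2d(3 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 1),
        )

    def forward(self, frame, prior_mask, flow):
        fused = torch.cat([self.appearance(frame),
                           self.location(prior_mask),
                           self.motion(flow)], dim=1)
        fused = self.attention(fused)
        logits = self.decoder(fused)
        # Upsample back to the input resolution for per-pixel supervision.
        return F.interpolate(logits, size=frame.shape[-2:],
                             mode="bilinear", align_corners=False)


def edge_loss(pred_logits, target):
    """Assumed edge-aware term: penalise disagreement between the gradient
    magnitudes of the predicted and ground-truth masks (Sobel filters)."""
    pred = torch.sigmoid(pred_logits)
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]],
                      device=pred.device)
    ky = kx.transpose(2, 3)
    def edges(m):
        gx = F.conv2d(m, kx, padding=1)
        gy = F.conv2d(m, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
    return F.l1_loss(edges(pred), edges(target))


if __name__ == "__main__":
    net = MultiBranchSegNet()
    frame = torch.randn(2, 3, 128, 128)   # RGB frames
    prior = torch.rand(2, 1, 128, 128)    # previous-frame masks
    flow = torch.randn(2, 2, 128, 128)    # optical flow fields
    target = (prior > 0.5).float()        # dummy ground-truth masks
    logits = net(frame, prior, flow)
    loss = (F.binary_cross_entropy_with_logits(logits, target)
            + 0.5 * edge_loss(logits, target))
    print(logits.shape, loss.item())
```

The residual connection in TargetAttention and the gradient-magnitude comparison in edge_loss are common design choices for attention fusion and boundary supervision, used here only to make the sketch concrete.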
