09 Jun 2021

Using a single deep model for cross-scene video foreground segmentation remains very challenging because existing methods are scene-dependent, which prevents consistent segmentation across scenes. In this paper, we propose a cross-scene video foreground segmentation framework that extends the generalization capability of supervised models that depend on scene-specific training. The proposed framework flexibly utilizes three well-trained supervised models as guidance to yield a coarse segmentation mask. A co-occurrence-probability-based unsupervised background subtraction model is then introduced to achieve scene adaptation in a plug-and-play manner, without any fine-tuning or labels. Experimental results on the LIMU and CDNet2014 datasets show that our framework outperforms the state-of-the-art supervised and unsupervised approaches included in the comparison. Experiments also show training-efficiency improvements: when the guidance models are introduced, the demand on the quantity and quality of training samples needed for the unsupervised model is reduced. Codes: https://github.com/MeteoorLiu/Venus/tree/MeteoorLiu-SUMC
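
The abstract describes the pipeline only at a high level, so the following is a minimal sketch of the idea under stated assumptions: several pretrained supervised models each emit a foreground probability map, the maps are fused into a coarse guidance mask, and an unsupervised background model adapts the segmentation to the new scene without labels or fine-tuning. The fusion rule, the running-average background model, and all names below are illustrative stand-ins, not the authors' implementation; in particular, the paper uses a co-occurrence-probability background model rather than the simple running average shown here.

```python
# Illustrative sketch (not the authors' code) of guidance-fused, scene-adaptive
# foreground segmentation: supervised guidance -> coarse mask -> unsupervised refinement.
import numpy as np


def fuse_guidance(prob_maps, vote_thresh=0.5):
    """Fuse per-model foreground probability maps into one coarse binary mask
    by majority voting (an assumed fusion rule, not the paper's)."""
    votes = np.stack([p > vote_thresh for p in prob_maps], axis=0)
    return votes.mean(axis=0) >= 0.5  # foreground if at least half the models agree


class SimpleBackgroundModel:
    """Stand-in for the co-occurrence-probability background model: a running-average
    background that is updated only where the coarse guidance mask says 'background',
    so the unsupervised stage adapts to the scene without labels or fine-tuning."""

    def __init__(self, first_frame, lr=0.05, diff_thresh=25.0):
        self.bg = first_frame.astype(np.float32)
        self.lr = lr
        self.diff_thresh = diff_thresh

    def segment(self, frame, coarse_mask):
        frame = frame.astype(np.float32)
        diff = np.abs(frame - self.bg)
        fg = diff > self.diff_thresh          # unsupervised foreground estimate
        refined = fg & coarse_mask            # keep pixels the guidance also flags
        update = ~coarse_mask                 # adapt the background on guided background pixels
        self.bg[update] += self.lr * (frame[update] - self.bg[update])
        return refined


# Toy usage: three noisy "guidance model" outputs on a synthetic grayscale frame pair.
h, w = 64, 64
frame0 = np.full((h, w), 100.0)
frame1 = frame0.copy()
frame1[20:40, 20:40] = 200.0                  # a moving object appears
prob_maps = [
    np.clip(np.random.rand(h, w) * 0.3 + (frame1 > 150) * 0.7, 0.0, 1.0)
    for _ in range(3)
]
coarse = fuse_guidance(prob_maps)
model = SimpleBackgroundModel(frame0)
mask = model.segment(frame1, coarse)
print("foreground pixels:", int(mask.sum()))
```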

Chairs:
Désiré Sidibé
