Local-Global Feature For Video-Based One-Shot Person Re-Identification
Chao Zhao, Zhenyu Zhang, Jian Yang, Yan Yan
SPS
One-shot video-based re-identification, which uses only one labeled tracklet per identity, is challenging because frameworks typically suffer from spatial misalignment and inefficient use of unlabeled data. In this paper we propose a novel local-global progressive learning framework to overcome these limitations. To obtain robust features for each tracklet, we first design sub-networks that learn four discriminative part-based feature maps together with one global feature map that is insensitive to misalignment. A novel adaptive loss is then proposed to properly balance the part-based and global features. To exploit unlabeled data, our framework gradually selects the most reliable pseudo-labeled tracklets and adds them to the training set for iterative training. Extensive experiments are conducted on two video-based Re-ID datasets, MARS and DukeMTMC-VideoReID. The mAP of our model outperforms the state-of-the-art methods by 20.8% on DukeMTMC-VideoReID.
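The progressive selection step described above can be sketched in NumPy under simplifying assumptions: each tracklet is reduced to a single feature vector, reliability is measured by Euclidean distance to the nearest labeled feature, and the function name `progressive_select` and the fraction parameter `frac` are illustrative, not from the paper.

```python
import numpy as np

def progressive_select(labeled_feats, labeled_ids, unlabeled_feats, frac):
    """Assign each unlabeled tracklet the identity of its nearest labeled
    tracklet, then keep only the fraction `frac` with the smallest
    distance, i.e. the most reliable pseudo-labels (a simplified sketch)."""
    # Pairwise Euclidean distances, shape (num_unlabeled, num_labeled).
    d = np.linalg.norm(
        unlabeled_feats[:, None, :] - labeled_feats[None, :, :], axis=2
    )
    nearest = d.argmin(axis=1)       # index of the closest labeled tracklet
    conf_dist = d.min(axis=1)        # distance used as a reliability score
    k = max(1, int(frac * len(unlabeled_feats)))
    keep = np.argsort(conf_dist)[:k]  # smallest distance = most reliable
    return keep, labeled_ids[nearest[keep]]

# Example: two labeled identities, three unlabeled tracklets; the two
# tracklets closest to a labeled feature receive pseudo-labels.
labeled_feats = np.array([[0.0, 0.0], [10.0, 10.0]])
labeled_ids = np.array([1, 2])
unlabeled_feats = np.array([[0.1, 0.0], [9.9, 10.0], [5.0, 5.0]])
keep, pseudo = progressive_select(labeled_feats, labeled_ids,
                                  unlabeled_feats, frac=0.67)
```

In the full framework this selection would be repeated each iteration with `frac` gradually enlarged, so the labeled pool grows as the feature extractor improves.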