
A Triplet Appearance Parsing Network For Person Re-Identification

Mingfu Xiong, Zhongyuan Wang, Ruhan He, Xinrong Hu, Ming Cheng, Xiao Qin, Jia Chen

Length: 00:11:14
10 Jun 2021

Person re-identification has become a prevalent research topic in multimedia and computer vision. However, existing feature extraction methods depend on the quality of detected bounding boxes, whose cluttered backgrounds lead to inhomogeneous and incoherent person representations, making them difficult to adapt to harsh real-world scenarios. This study develops a Triplet person Appearance Parsing Framework (TAPF) that suppresses the interfering regions surrounding the bounding box for person re-identification. The framework consists of a triplet person parsing network and an integration mechanism for local and global person appearance information. Concretely, the triplet parsing network comprises a channel parsing module, a position parsing module, and a color parsing module, which extract a channel parsing descriptor, a regional descriptor, and a color perception descriptor, respectively. Local and global flatten Gaussian operations are then performed to integrate these appearance parsing descriptors into a more discriminative person representation. Experiments on several public datasets, i.e., VIPeR and Market-1501, validate that the proposed algorithm achieves better person re-identification performance.
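Since the abstract only outlines the architecture, the following is a minimal PyTorch sketch of how three parallel channel/position/color parsing branches could be applied to a backbone feature map and fused into a single embedding. The branch internals, the striped average pooling used in place of the paper's local/global flatten Gaussian operations, and all module names are assumptions for illustration only, not the authors' TAPF implementation.

```python
# Hypothetical sketch of a triplet parsing head; NOT the authors' TAPF code.
import torch
import torch.nn as nn


class ChannelParsing(nn.Module):
    """Squeeze-and-excitation style channel re-weighting (assumed stand-in)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))        # (B, C) channel weights
        return x * w[:, :, None, None]


class PositionParsing(nn.Module):
    """Spatial attention over H x W positions (assumed stand-in)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))  # (B, 1, H, W) attention map


class ColorParsing(nn.Module):
    """1x1 projection meant to capture color-sensitive statistics (assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.proj(x)


class TripletParsingHead(nn.Module):
    """Runs the three branches in parallel and fuses local + global descriptors."""
    def __init__(self, channels, embed_dim=256):
        super().__init__()
        self.channel_branch = ChannelParsing(channels)
        self.position_branch = PositionParsing(channels)
        self.color_branch = ColorParsing(channels)
        # Global pooling plus 4 horizontal stripes as a crude stand-in for the
        # paper's local/global flatten Gaussian integration.
        self.global_pool = nn.AdaptiveAvgPool2d(1)
        self.local_pool = nn.AdaptiveAvgPool2d((4, 1))
        self.embed = nn.Linear(channels * 3 * (1 + 4), embed_dim)

    def forward(self, feat):                   # feat: backbone map (B, C, H, W)
        descriptors = []
        for branch in (self.channel_branch, self.position_branch, self.color_branch):
            y = branch(feat)
            g = self.global_pool(y).flatten(1)   # global descriptor (B, C)
            l = self.local_pool(y).flatten(1)    # local stripe descriptor (B, 4*C)
            descriptors.append(torch.cat([g, l], dim=1))
        return self.embed(torch.cat(descriptors, dim=1))  # (B, embed_dim)


if __name__ == "__main__":
    head = TripletParsingHead(channels=512)
    feat = torch.randn(2, 512, 24, 8)          # e.g. a conv feature map of a person crop
    print(head(feat).shape)                    # torch.Size([2, 256])
```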

Chairs:
Patrick Le Callet

