  • SPS
    Members: Free
    IEEE Members: $11.00
    Non-members: $15.00
    Length: 00:05:43
21 Sep 2021

Most person re-identification methods rely solely on pedestrian identity for learning. Person attributes, such as gender, clothing color, and carried bags, are seldom used, yet these attributes are highly identity-related and should be fully exploited. We therefore propose the Attribute Parsing Network (APNet), an architecture designed for joint image and person-attribute learning and retrieval. To further enhance re-id performance, we leverage saliency maps and human parsing to boost the foreground features; trained jointly with the global and local networks, this yields more generic and robust encoded representations. The proposed method achieves state-of-the-art accuracy on both the Market1501 (87.3% mAP and 95.2% Rank-1) and DukeMTMC-reID (78.8% mAP and 89.2% Rank-1) datasets.
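The foreground-boosting idea can be illustrated with a minimal sketch: a saliency map weights each spatial location of a feature map, so foreground (person) regions dominate the pooled descriptor, which is then concatenated with a plain global descriptor. This is a hypothetical illustration in pure Python, not the paper's actual APNet implementation; all function names here are assumptions.

```python
# Hypothetical sketch of saliency-weighted foreground pooling.
# feat_map: list of per-location feature vectors; saliency: one weight per location.

def global_avg_pool(feat_map):
    """Average the feature vectors over all spatial locations (global branch)."""
    n, dim = len(feat_map), len(feat_map[0])
    return [sum(v[d] for v in feat_map) / n for d in range(dim)]

def saliency_pool(feat_map, saliency):
    """Saliency-weighted average, emphasizing foreground locations."""
    total = sum(saliency) or 1e-8  # guard against an all-zero saliency map
    dim = len(feat_map[0])
    return [sum(w * v[d] for w, v in zip(saliency, feat_map)) / total
            for d in range(dim)]

def fuse(feat_map, saliency):
    """Concatenate global and saliency-boosted descriptors into one embedding."""
    return global_avg_pool(feat_map) + saliency_pool(feat_map, saliency)
```

With a two-location, two-dimensional toy feature map, a saliency map of `[0, 1]` makes the boosted half of the embedding equal the second location's features, while the global half remains their plain average.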

