CONTEXT-AWARE GENERATION-BASED NET FOR MULTI-LABEL VISUAL EMOTION RECOGNITION

Shulan Ruan, Kun Zhang, Yijun Wang, Hanqing Tao, Weidong He, Guangyi Lv, Enhong Chen

Length: 06:30
09 Jul 2020

Visual emotion recognition has attracted increasing research attention in recent years. Existing approaches mainly rely on facial expressions or classify the whole image as either positive or negative. In practice, however, people can recognize multiple emotions in a single image by combining global and local information. In this paper, we propose the Context-Aware Generation-Based Net (CAGBN), a novel architecture that makes full use of both global and local information by jointly considering the whole image and the details of the target person. Inspired by psychological studies showing that, when viewing a person in context, people tend to form judgments gradually rather than assigning all labels at once, CAGBN transforms the multi-label classification problem into a sequence generation task for better recognition. Extensive experimental results on an emotion recognition dataset demonstrate the superiority and rationality of CAGBN.
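The generation-based formulation can be illustrated with a minimal sketch (not the authors' code): the set of target emotion labels is ordered into a sequence terminated by a stop token, so that a decoder can emit one label per step instead of predicting all labels simultaneously. The `EMOTIONS` vocabulary and the fixed-priority ordering below are assumptions for illustration only; the paper does not specify them.

```python
# Illustrative sketch only: recasting multi-label classification as
# sequence generation. The label vocabulary and ordering are assumed,
# not taken from the CAGBN paper.
EMOTIONS = ["happiness", "surprise", "anger", "sadness", "fear", "disgust"]
STOP = "<eos>"  # stop token that ends the generated label sequence


def labels_to_sequence(label_set):
    """Turn an unordered set of target labels into a decoding sequence.

    Labels are ordered by a fixed priority (their index in EMOTIONS)
    and a stop token is appended, so a sequence decoder can be trained
    to emit labels one step at a time.
    """
    ordered = sorted(label_set, key=EMOTIONS.index)
    return ordered + [STOP]


def sequence_to_labels(seq):
    """Invert the mapping: collect emitted labels until the stop token."""
    labels = set()
    for token in seq:
        if token == STOP:
            break
        labels.add(token)
    return labels
```

At inference time, such a decoder would generate labels greedily until it emits the stop token, naturally producing a variable number of emotions per image, which mirrors the gradual-judgment intuition the paper cites.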
