07 Jul 2020

With more than 300 million people affected by depression worldwide each year, depression is a global problem. The goal of depression detection is to improve diagnostic accuracy and availability, leading to faster intervention. The most important and challenging problem is to design an effective and robust depression detection model. To this end, two challenges must be overcome: 1) multi-modal information (audio, image, text, etc.) must be considered jointly to make accurate inferences, and 2) existing deep-learning-based work suffers from the multi-modal data sufficiency problem. To address these issues, we propose a graph attention model embedded with multi-modal knowledge for depression detection. This approach not only learns reasonable embeddings for the nodes in the knowledge graph, but also exploits medical knowledge through a knowledge attention mechanism to improve classification and prediction performance. Experimental results on two real-world datasets show that the proposed approach significantly improves classification and prediction performance compared with other major state-of-the-art approaches, while guaranteeing robustness with respect to each modality of the multi-modal data. Overall, this paper shows how a multi-modal knowledge attention mechanism and deep-learning-based networks can be combined to assist mental health patients and practitioners.
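The abstract names two ingredients, graph attention over a knowledge graph and knowledge-guided attention across modalities, without implementation details. The PyTorch sketch below is only a rough illustration of how those two pieces could fit together; all class names, dimensions, the single-layer design, and the fusion scheme are assumptions and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): one GAT-style layer over a knowledge
# graph, plus a simple knowledge-attention fusion of per-modality features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single graph-attention layer: each node attends over its neighbours."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (N, in_dim) node embeddings, adj: (N, N) 0/1 adjacency matrix
        Wh = self.W(h)                                   # (N, out_dim)
        N = Wh.size(0)
        pairs = torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                           Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))      # (N, N) raw scores
        e = e.masked_fill(adj == 0, float('-inf'))       # keep only graph edges
        alpha = torch.softmax(e, dim=-1)                 # attention over neighbours
        return F.elu(alpha @ Wh)                         # updated node embeddings

class KnowledgeAttentionFusion(nn.Module):
    """Weights modality features (audio/image/text) by relevance to knowledge embeddings."""
    def __init__(self, feat_dim, know_dim):
        super().__init__()
        self.query = nn.Linear(know_dim, feat_dim, bias=False)

    def forward(self, modal_feats, knowledge):
        # modal_feats: (M, feat_dim), one vector per modality; knowledge: (know_dim,)
        q = self.query(knowledge)                        # project knowledge to feature space
        scores = modal_feats @ q                         # (M,) relevance of each modality
        alpha = torch.softmax(scores, dim=0)
        return (alpha.unsqueeze(-1) * modal_feats).sum(0)  # fused (feat_dim,) vector

# Toy usage: 5 knowledge-graph nodes, 3 modalities, binary classifier head.
gat = GraphAttentionLayer(16, 32)
fusion = KnowledgeAttentionFusion(feat_dim=32, know_dim=32)
classifier = nn.Linear(32, 2)

nodes = torch.randn(5, 16)
adj = torch.eye(5)                                       # self-loops only, for the demo
know = gat(nodes, adj).mean(0)                           # pooled knowledge embedding
modal = torch.randn(3, 32)                               # placeholder audio/image/text features
logits = classifier(fusion(modal, know))                 # depressed vs. not depressed
```

In this sketch the knowledge graph only modulates how the modalities are weighted; the paper's actual mechanism for combining graph embeddings with the multi-modal encoders may differ.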
