DAILY MENTAL HEALTH MONITORING FROM SPEECH: A REAL-WORLD JAPANESE DATASET AND MULTITASK LEARNING ANALYSIS
Meishu Song (University of Augsburg); Andreas Triantafyllopoulos (University of Augsburg); Zijiang Yang (University of Augsburg); Hiroki Takeuchi (University of Tokyo); Toru Nakamura (Osaka University); Akifumi Kishi (University of Tokyo); Tetsuro Ishizawa (University of Tokyo); Kazuhiro Yoshiuchi (University of Tokyo); Xin Jing (University of Augsburg); Zhonghao Zhao (Beijing Institute of Technology); Vincent Karas (University of Augsburg); Kun Qian (Beijing Institute of Technology); Bin Hu (Beijing Institute of Technology); Björn W. Schuller (Imperial College London); Yoshiharu Yamamoto (University of Tokyo)
Translating mental health recognition from clinical research into real-world applications requires extensive data, yet existing emotion datasets offer little support for daily mental health monitoring, especially for recognising self-reported anxiety and depression. We introduce the Japanese Daily Speech Dataset (JDSD), a large in-the-wild daily speech emotion dataset comprising 20,827 speech samples from 342 speakers with a total duration of 54 hours. The data are annotated on the Depression and Anxiety Mood Scale (DAMS) with nine self-reported emotions that characterise mood state: “vigorous”, “gloomy”, “concerned”, “happy”, “unpleasant”, “anxious”, “cheerful”, “depressed”, and “worried”. Our dataset is diverse in emotional states, activities, and recording times, making it well suited for training models that track daily emotional states for healthcare purposes. We partition the corpus and provide a multi-task benchmark across the nine emotions, demonstrating that self-reported mental health states can be predicted reliably from speech, with an average Concordance Correlation Coefficient (CCC) of .547. We hope that JDSD will become a valuable resource for advancing daily emotional healthcare tracking.
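The Concordance Correlation Coefficient used in the benchmark measures both the correlation and the agreement in scale and location between predicted and self-reported scores. The following is a minimal NumPy sketch of Lin's CCC, not the authors' exact evaluation code; the function name and the biased (population) variance convention are assumptions.

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's Concordance Correlation Coefficient between
    gold self-report scores and model predictions.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()  # population variances
    cov = np.mean((y_true - mu_t) * (y_pred - mu_p))
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

# Perfect agreement yields 1.0; a constant offset lowers the score
# even when the Pearson correlation stays at 1.
```

Unlike Pearson correlation, CCC penalises systematic bias, which is why it is a common choice for continuous emotion prediction.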