GENERAL OR SPECIFIC? INVESTIGATING EFFECTIVE PRIVACY PROTECTION IN FEDERATED LEARNING FOR SPEECH EMOTION RECOGNITION
Chao Tan (Kyoto University); Yang Cao (Hokkaido University); Sheng Li (National Institute of Information & Communications Technology (NICT)); Masatoshi Yoshikawa (Kyoto University)
Federated Learning (FL) is considered a new paradigm of privacy-preserving machine learning, since the server trains a machine learning model in a distributed way by collecting only local models rather than clients' raw data. However, recent studies show that FL suffers from inference attacks: sensitive information can still be inferred from the shared local models. In this work, we investigate the effectiveness of existing rigorous privacy-enhancing techniques, i.e., user-level differential privacy (UDP) and Voice-Indistinguishability (Voice-Ind), for enhancing FL in the scenario of Speech Emotion Recognition (SER) against gender inference attacks. UDP is a general-purpose privacy notion, whereas Voice-Ind was proposed for protecting voiceprints. In addition, we propose a new privacy notion, Gender-Indistinguishability (Gender-Ind), which is specifically designed for protecting gender information in speech data, and evaluate its privacy-utility tradeoff against the above two privacy notions. The experiments reveal that our specifically designed privacy notion, Gender-Ind, achieves better utility while defending against the same level of attacks. This finding sheds light on how to design privacy protection methods for speech data processing.
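To make the user-level DP baseline concrete, here is a minimal illustrative sketch (not the paper's exact mechanism; the clipping bound and noise multiplier are assumed hyperparameters) of how a client could clip its local model update and add Gaussian noise before sharing it with the FL server:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Illustrative user-level DP step for FL (hypothetical, for exposition):
    clip the client's model update to bound its L2 sensitivity, then add
    Gaussian noise calibrated to that bound before sending it to the server.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    # Scale the update down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise with standard deviation proportional to the clip bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise
```

With a larger `noise_multiplier`, the shared update leaks less about attributes such as gender but degrades SER utility, which is the privacy-utility tradeoff the experiments measure.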