Ventriloquist-Net: Leveraging Speech Cues For Emotive Talking Head Generation
Deepan Das, Qadeer Khan, Daniel Cremers
As neural networks grow deeper and more computationally intensive, model quantization has emerged as a promising compression tool that lowers computational cost with limited performance degradation, enabling deployment on edge devices. Meanwhile, recent studies have shown that neural network models are vulnerable to a variety of security and privacy threats. Among these, membership inference attacks (MIAs) can breach user privacy by identifying whether a given sample was part of a model's training data. This paper empirically investigates the impact of model quantization on the resistance of neural networks against MIAs. We demonstrate that quantized models are less likely to leak private information about their training data than their full-precision counterparts. Our experimental results show that, at the same recall, the precision of MIAs against quantized models is 7 to 9 points lower than against their full-precision counterparts. To the best of our knowledge, this is the first work to study the implications of model quantization for the resistance of neural network models against MIAs.
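To make the setting concrete, the sketch below illustrates the two ingredients the abstract combines: post-training dynamic quantization of a model and a simple confidence-threshold MIA that flags high-confidence inputs as likely training members. This is a minimal illustration, not the paper's attack or evaluation protocol; the toy model, the random stand-in data, and the fixed threshold are all hypothetical, and the paper instead compares attack precision at matched recall.

```python
# Minimal sketch (assumptions labeled): a confidence-threshold MIA run
# against a full-precision model and a dynamically quantized copy.
# The untrained toy model, random "member"/"non-member" data, and the
# 0.6 threshold are placeholders, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier; in practice this would be a network trained on private data.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).eval()

# Post-training dynamic quantization: Linear weights are stored as int8.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

members = torch.randn(500, 20)      # stand-ins for training samples
non_members = torch.randn(500, 20)  # stand-ins for unseen samples

def attack_precision(net, threshold=0.6):
    """Flag a sample as 'member' when max softmax confidence > threshold."""
    with torch.no_grad():
        conf_m = net(members).softmax(dim=1).max(dim=1).values
        conf_n = net(non_members).softmax(dim=1).max(dim=1).values
    tp = (conf_m > threshold).sum().item()  # members flagged as members
    fp = (conf_n > threshold).sum().item()  # non-members wrongly flagged
    return tp / max(tp + fp, 1)

print("full-precision attack precision:", attack_precision(model))
print("quantized attack precision:     ", attack_precision(quantized))
```

The paper's reported 7-to-9-point precision gap suggests that, in such a comparison on genuinely trained models, the quantized variant's attack precision would come out measurably lower at the same recall.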