DISTANCE-BASED ONLINE LABEL INFERENCE ATTACKS AGAINST SPLIT LEARNING
Junlin Liu (Beijing University of Posts and Telecommunications); Xinchen Lyu (Beijing University of Posts and Telecommunications)
SPS
Split learning is deemed a promising paradigm for distributed learning on resource-constrained devices, where the learning model is split and trained collaboratively by the participants. Unlike federated learning, which shares the entire gradients, split learning only requires exchanging the intermediate learning results (i.e., the extracted features/smashed data and gradients) at the cut layer, thereby necessitating distinct, new attack designs. Understanding the security performance of split learning is critical for various privacy-sensitive applications. Focusing on private labels, this paper proposes three label inference attacks based on the similarities of the exchanged gradients and smashed data to the sample points. We mathematically analyze and unify these similarities (for retrieving the accurate labels) into the Euclidean distance; as a result, the attacks can be conducted online by finding the sample point nearest to the target data in Euclidean space. Moreover, we are the first to show that transfer learning may also be exploited to steal the labels directly from the raw data. Experimental results demonstrate that the proposed attacks can still recover the private labels against three state-of-the-art label protection methods.
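To illustrate the core idea of a distance-based online attack, the following is a minimal NumPy sketch, not the paper's actual attack: it assumes the attacker has one hypothetical reference vector per class (e.g., a previously observed cut-layer gradient for a sample of known label) and infers the label of a new observation as the class of the nearest reference point in Euclidean distance. The function name `infer_label` and all data are illustrative assumptions.

```python
import numpy as np

def infer_label(observed, references):
    """Distance-based label inference (illustrative sketch): return the
    index of the class whose reference vector is closest to the observed
    cut-layer vector in Euclidean distance."""
    dists = np.linalg.norm(references - observed, axis=1)
    return int(np.argmin(dists))

# Toy example (made-up data): 3 classes, 4-dimensional cut-layer vectors.
refs = np.array([
    [1.0, 0.0, 0.0, 0.0],   # hypothetical reference for class 0
    [0.0, 1.0, 0.0, 0.0],   # hypothetical reference for class 1
    [0.0, 0.0, 1.0, 0.0],   # hypothetical reference for class 2
])
obs = np.array([0.1, 0.9, 0.05, 0.0])  # vector observed at the cut layer
print(infer_label(obs, refs))  # → 1
```

Because only a nearest-neighbor search in Euclidean space is needed, the inference can run online as each batch of gradients or smashed data is exchanged.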