FedPrompt: Communication-Efficient and Privacy-Preserving Prompt Tuning in Federated Learning
Haodong Zhao (Shanghai Jiao Tong University); Wei Du (Shanghai Jiao Tong University); Fangqi Li (SEIEE, Shanghai Jiao Tong University); Peixuan Li (Shanghai Jiao Tong University); Gongshen Liu (Shanghai Jiao Tong University)
-
SPS
IEEE Members: $11.00
Non-members: $15.00
Federated learning (FL) has enabled global model training on decentralized data in a privacy-preserving way. However, for tasks that utilize pre-trained language models (PLMs) with massive parameters, there are considerable communication costs. Prompt tuning, which tunes soft prompts without modifying PLMs, has achieved excellent performance as a new learning paradigm.
In this paper, we want to combine these methods and explore the effect of prompt tuning under FL. We propose "FedPrompt" studying prompt tuning in a model split aggregation way using FL, and prove that split aggregation greatly reduces the communication cost, only 0.01\% of the PLMs' parameters, with little decrease on accuracy both on IID and Non-IID data distribution.
We further conduct backdoor attacks by data poisoning on FedPrompt. Experiments show that attack achieve a quite low attack success rate and can not inject backdoor effectively, proving the robustness of FedPrompt.