Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization
Rui Hu, Yanmin Gong, Yuanxiong Guo
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 1463-1469.
https://doi.org/10.24963/ijcai.2021/202
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other. However, data locality does not provide sufficient privacy protection, and it is desirable to facilitate FL with a rigorous differential privacy (DP) guarantee. Existing DP mechanisms would introduce random noise whose magnitude is proportional to the model size, which can be quite large for deep neural networks. In this paper, we propose a new FL framework with sparsification-amplified privacy. Our approach integrates random sparsification with gradient perturbation on each agent to amplify the privacy guarantee. Since sparsification would increase the number of communication rounds required to achieve a certain target accuracy, which is unfavorable for the DP guarantee, we further introduce acceleration techniques to help reduce the privacy cost. We rigorously analyze the convergence of our approach and utilize Rényi DP to tightly account for the end-to-end DP guarantee. Extensive experiments on benchmark datasets validate that our approach outperforms previous differentially private FL approaches in both privacy guarantee and communication efficiency.
Keywords:
Data Mining: Federated Learning
Multidisciplinary Topics and Applications: Security and Privacy
Data Mining: Privacy Preserving Data Mining
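To make the core mechanism in the abstract concrete, here is a minimal Python sketch of one agent's update: the gradient is clipped to bound sensitivity, randomly sparsified to k coordinates, and Gaussian noise is injected only on the retained coordinates. The function name private_sparse_gradient, the parameter choices, and the exact placement of the noise relative to sparsification are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def private_sparse_gradient(grad, k, clip_norm, sigma, rng):
    """Sketch of sparsification-amplified gradient perturbation.

    Clips grad to l2 norm clip_norm, keeps k uniformly random
    coordinates, and adds Gaussian noise only to those coordinates,
    so the injected noise scales with k rather than the full model
    size d (assumed behavior, for illustration only).
    """
    d = grad.size
    # Clip to bound each agent's per-round l2 sensitivity.
    clipped = grad * min(1.0, clip_norm / (np.linalg.norm(grad) + 1e-12))
    # Random-k sparsification: keep k uniformly chosen coordinates.
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(clipped)
    # Gaussian mechanism on the retained coordinates only.
    out[idx] = clipped[idx] + rng.normal(0.0, sigma * clip_norm, size=k)
    return out

# Example usage with illustrative parameters.
rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
noisy_sparse_g = private_sparse_gradient(g, k=100, clip_norm=1.0,
                                         sigma=1.0, rng=rng)
```

In this sketch the server would aggregate the noisy sparse updates as usual; the random coordinate selection is what yields the privacy amplification the abstract refers to, while the reduced payload also cuts per-round communication.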