Model Conversion via Differentially Private Data-Free Distillation

Bochao Liu, Pengju Wang, Shikun Li, Dan Zeng, Shiming Ge

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 2187-2195. https://doi.org/10.24963/ijcai.2023/243

While many valuable deep models trained on large-scale data have been released to benefit the artificial intelligence community, they may encounter attacks in deployment that lead to privacy leakage of the training data. In this work, we propose a learning approach termed differentially private data-free distillation (DPDFD) for model conversion that can convert a pretrained model (teacher) into its privacy-preserving counterpart (student) via an intermediate generator, without access to the training data. The learning coordinates three parties in a unified way. First, massive synthetic data are generated with the generator. Then, they are fed into the teacher and student to compute differentially private gradients, obtained by normalizing the gradients and adding noise before performing descent. Finally, the student is updated with these differentially private gradients, and the generator is updated by taking the student as a fixed discriminator, in an alternating manner. In addition to a privacy-preserving student, the generator can produce synthetic data in a differentially private way for other downstream tasks. We theoretically prove that our approach guarantees differential privacy and convergence. Extensive experiments demonstrate the effectiveness of our approach, which significantly outperforms other differentially private generative approaches.
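The core privacy mechanism described above (normalize each gradient, average, add Gaussian noise before the descent step) can be sketched as follows. This is a minimal illustration in NumPy, not the authors' implementation; the function name, the per-example gradient representation, and the noise parameterization are assumptions for clarity.

```python
import numpy as np

def dp_gradient(per_example_grads, norm_bound=1.0, noise_multiplier=1.0, rng=None):
    """Sketch of a differentially private gradient step as the abstract
    describes it: normalize each per-example gradient to a fixed norm,
    average, then add Gaussian noise scaled to that norm.
    Hypothetical helper -- not the paper's actual code."""
    rng = np.random.default_rng(0) if rng is None else rng
    normalized = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Rescale every gradient to the fixed norm bound (avoiding division by zero).
        normalized.append(g * (norm_bound / max(norm, 1e-12)))
    avg = np.mean(normalized, axis=0)
    # Gaussian noise with standard deviation proportional to the per-example
    # norm bound, shrunk by the batch size after averaging.
    sigma = noise_multiplier * norm_bound / len(per_example_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)
```

The student would then be updated by descending along the returned noisy gradient, while the generator is trained against the (fixed) student in alternation; the normalization step bounds each example's influence, which is what makes the Gaussian noise yield a differential privacy guarantee.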
Keywords:
Data Mining: DM: Privacy-preserving data mining
Computer Vision: CV: Bias, fairness and privacy