Differentially Private Correlation Alignment for Domain Adaptation

Kaizhong Jin, Xiang Cheng, Jiaxi Yang, Kaiyuan Shen

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 3649-3655. https://doi.org/10.24963/ijcai.2021/502

Domain adaptation solves a learning problem in a target domain by utilizing the training data in a different but related source domain. As a simple and efficient method for domain adaptation, correlation alignment transforms the distribution of the source domain by utilizing the covariance matrix of the target domain, such that a model trained on the transformed source data can be applied to the target data. However, when the source and target domains come from different institutes, exchanging information between the two domains may pose a privacy risk. In this paper, for the first time, we propose a differentially private correlation alignment approach for domain adaptation called PRIMA, which can provide privacy guarantees for both the source and target data. In PRIMA, to relieve the performance degradation caused by perturbing the covariance matrix in the high-dimensional setting, we present a random subspace ensemble-based covariance estimation method which splits the feature spaces of the source and target data into several low-dimensional subspaces. Moreover, since perturbing the covariance matrix may destroy its positive semi-definiteness, we develop a shrinking-based method for recovering the positive semi-definiteness of the covariance matrix. Experimental results on standard benchmark datasets confirm the effectiveness of our approach.
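To make the correlation alignment step concrete, below is a minimal sketch of the classic (non-private) alignment the abstract builds on: whiten the source features with the inverse square root of the source covariance, then re-color them with the square root of the target covariance. The function name coral_align, the ridge term eps, and the eigenvalue clipping used for numerical stability are assumptions of this sketch, not the paper's PRIMA mechanism; in particular, PRIMA additionally perturbs the covariance for differential privacy and restores positive semi-definiteness with a shrinking-based method, neither of which is shown here.

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """Re-color source features with the target covariance (classic correlation alignment).

    source, target: (n_samples, n_features) arrays with the same number of features.
    eps: small ridge added for numerical stability (an assumption of this sketch).
    """
    d = source.shape[1]
    cov_s = np.cov(source, rowvar=False) + eps * np.eye(d)
    cov_t = np.cov(target, rowvar=False) + eps * np.eye(d)

    def mat_power(m, p):
        # Symmetric matrix power via eigendecomposition; clipping keeps the matrix PSD.
        vals, vecs = np.linalg.eigh(m)
        vals = np.clip(vals, eps, None)
        return vecs @ np.diag(vals ** p) @ vecs.T

    # Whiten with cov_s^{-1/2}, then re-color with cov_t^{1/2}.
    return source @ mat_power(cov_s, -0.5) @ mat_power(cov_t, 0.5)
```

A model trained on the returned, re-colored source features can then be applied to the target features; the eigenvalue clipping inside mat_power also illustrates why recovering positive semi-definiteness matters once the covariance is perturbed with noise.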
Keywords:
Multidisciplinary Topics and Applications: Security and Privacy
Machine Learning: Transfer, Adaptation, Multi-task Learning