Private Stochastic Convex Optimization and Sparse Learning with Heavy-tailed Data Revisited

Youming Tao, Yulian Wu, Xiuzhen Cheng, Di Wang

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3947-3953. https://doi.org/10.24963/ijcai.2022/548

In this paper, we revisit the problem of Differentially Private Stochastic Convex Optimization (DP-SCO) with heavy-tailed data, where the gradient of the loss function has bounded moments. In contrast to previous work, which assumes the loss function is Lipschitz or that each coordinate of the gradient has bounded second moment, we consider a relaxed scenario where each coordinate of the gradient only has bounded (1+v)-th moment for some v∈(0, 1]. We first study one-dimensional private mean estimation for heavy-tailed distributions and propose a novel robust and private mean estimator that is optimal. Building on this idea, we then extend to the general d-dimensional setting and study DP-SCO with both general convex and strongly convex loss functions. We also provide lower bounds for these two classes of losses under our setting and show that our upper bounds are optimal up to a factor of O(Poly(d)). To address the high-dimensionality issue, we further study DP-SCO with heavy-tailed gradients under a sparsity constraint (DP sparse learning). We propose a new method and show that it is also optimal up to a factor of O(s*), where s* is the underlying sparsity of the constraint.
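The paper's own estimator is given in the full text; for intuition only, the sketch below shows the standard truncate-then-perturb recipe for one-dimensional private mean estimation of heavy-tailed data: clip each sample to a data-independent range so the empirical mean has bounded sensitivity, then add Laplace noise calibrated to that sensitivity. This is not the authors' method; the function name private_truncated_mean and the parameters clip and epsilon are hypothetical illustrations.

```python
# Minimal sketch (not the paper's estimator): private mean estimation for
# heavy-tailed 1-D data via truncation + Laplace noise.
import numpy as np

def private_truncated_mean(samples, clip, epsilon, rng=None):
    """Return an epsilon-DP estimate of the mean.

    samples: 1-D array with bounded (1+v)-th moment (heavy-tailed allowed).
    clip:    truncation threshold; samples are projected to [-clip, clip].
    epsilon: privacy budget (hypothetical parameter for this sketch).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(samples, dtype=float), -clip, clip)
    n = x.size
    # Changing one sample moves the clipped average by at most 2*clip/n,
    # so Laplace noise of scale 2*clip/(n*epsilon) suffices for epsilon-DP.
    noise = rng.laplace(scale=2.0 * clip / (n * epsilon))
    return x.mean() + noise

# Usage: heavy-tailed (Pareto) samples; clip trades truncation bias
# against noise, a choice the paper makes via the (1+v)-th moment bound.
data = np.random.default_rng(0).pareto(2.5, size=10_000) + 1.0
print(private_truncated_mean(data, clip=20.0, epsilon=1.0))
```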
Keywords:
Multidisciplinary Topics and Applications: Security and Privacy