Reducing Communication for Split Learning by Randomized Top-k Sparsification
Fei Zheng, Chaochao Chen, Lingjuan Lyu, Binhui Yao

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 4665-4673. https://doi.org/10.24963/ijcai.2023/519

Split learning is a simple solution for Vertical Federated Learning (VFL) that has drawn substantial attention in both research and application due to its simplicity and efficiency. However, communication efficiency remains a crucial issue for split learning. In this paper, we investigate multiple communication-reduction methods for split learning, including cut-layer size reduction, top-k sparsification, quantization, and L1 regularization. Through analysis of cut-layer size reduction and top-k sparsification, we further propose randomized top-k sparsification, which helps the model generalize and converge better: top-k elements are selected with large probability, while non-top-k elements retain a small probability of being selected. Empirical results show that, compared with other communication-reduction methods, our proposed randomized top-k sparsification achieves better model performance at the same compression level.
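The selection rule described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the general idea (not the authors' implementation): each of the k kept slots is filled by a top-k element with probability `p_top`, and otherwise by a randomly chosen non-top-k element; all other activations are zeroed before transmission. The function name and the `p_top` parameter are hypothetical.

```python
import numpy as np

def randomized_topk(x, k, p_top=0.9, rng=None):
    """Keep exactly k elements of x: each slot takes a top-k element
    (by magnitude) with probability p_top, else a random non-top-k one.
    Hypothetical sketch of randomized top-k sparsification."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(-np.abs(x))          # indices by descending magnitude
    top, rest = list(order[:k]), list(order[k:])
    keep = []
    for idx in top:
        if rest and rng.random() >= p_top:
            # small probability: swap in a random non-top-k element
            j = rng.integers(len(rest))
            keep.append(rest.pop(j))
        else:
            keep.append(idx)
    out = np.zeros_like(x)
    out[keep] = x[keep]                      # only k values are sent
    return out
```

With `p_top=1.0` this reduces to ordinary deterministic top-k sparsification; lowering `p_top` injects the randomness that the paper argues improves generalization and convergence.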
Keywords:
Machine Learning: ML: Federated learning
Machine Learning: ML: Learning sparse models
Multidisciplinary Topics and Applications: MDA: Security and privacy