FABA: An Algorithm for Fast Aggregation against Byzantine Attacks in Distributed Neural Networks

Qi Xia, Zeyi Tao, Zijiang Hao, Qun Li

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 4824-4830. https://doi.org/10.24963/ijcai.2019/670

Training a large-scale deep neural network on a single machine becomes increasingly difficult as network models grow more complex. Distributed training provides an efficient solution, but Byzantine attacks may occur on participating workers: workers may be compromised or suffer from hardware failures, and if they upload poisoned gradients, the training becomes unstable or may even converge to a saddle point. In this paper, we propose FABA, a Fast Aggregation algorithm against Byzantine Attacks, which removes the outliers in the uploaded gradients and obtains gradients that are close to the true gradients. We show the convergence of our algorithm. Experiments demonstrate that our algorithm achieves performance similar to the non-Byzantine case and higher efficiency than previous algorithms.
Keywords:
Multidisciplinary Topics and Applications: Security and Privacy
Machine Learning: Deep Learning
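
The abstract describes FABA only at a high level: filter outliers out of the uploaded gradients so that the aggregate stays close to the true gradient. As a minimal sketch of that outlier-removal idea (not necessarily the paper's exact procedure), the Python snippet below iteratively discards the gradient farthest from the mean of the remaining ones, assuming a known upper bound on the fraction of Byzantine workers. The function name faba_style_aggregate and the parameter alpha are illustrative assumptions, not names from the paper.

    import numpy as np

    def faba_style_aggregate(grads, alpha):
        """Aggregate worker gradients by repeated outlier removal.

        grads: list of gradient vectors (one per worker).
        alpha: assumed upper bound on the fraction of Byzantine workers
               (an illustrative parameter, not from the paper).
        """
        remaining = [np.asarray(g, dtype=float) for g in grads]
        n = len(remaining)
        k = int(alpha * n)  # assumed number of gradients to discard
        for _ in range(k):
            mean = np.mean(remaining, axis=0)
            # Discard the gradient farthest from the current mean,
            # treating it as a likely Byzantine outlier.
            dists = [np.linalg.norm(g - mean) for g in remaining]
            remaining.pop(int(np.argmax(dists)))
        # Average the surviving gradients as the aggregate.
        return np.mean(remaining, axis=0)

Toy usage, with 8 honest workers clustered around a true gradient and 2 adversarial workers uploading large random vectors:

    rng = np.random.default_rng(0)
    true_grad = np.ones(5)
    honest = [true_grad + 0.01 * rng.standard_normal(5) for _ in range(8)]
    byzantine = [100.0 * rng.standard_normal(5) for _ in range(2)]
    agg = faba_style_aggregate(honest + byzantine, alpha=0.2)
    # agg should be close to true_grad despite the two outliers.

One appeal of this style of filtering, consistent with the "fast" in the title, is that each removal step needs only distances to a running mean rather than the pairwise distance computations used by some earlier Byzantine-tolerant aggregation rules.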