Communication-efficient and Scalable Decentralized Federated Edge Learning

Austine Zong Han Yapp, Hong Soo Nicholas Koh, Yan Ting Lai, Jiawen Kang, Xuandi Li, Jer Shyuan Ng, Hongchao Jiang, Wei Yang Bryan Lim, Zehui Xiong, Dusit Niyato

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Demo Track. Pages 5032-5035. https://doi.org/10.24963/ijcai.2021/720

Federated Edge Learning (FEL) is a distributed Machine Learning (ML) framework for collaborative model training on edge devices. FEL improves data privacy over traditional centralized ML training by keeping data on the devices and sending only local model updates to a central coordinator for aggregation. However, existing FEL architectures still suffer from high communication overhead between edge devices and the coordinator. In this paper, we present a working prototype of a blockchain-empowered and communication-efficient FEL framework, which enhances security and scalability toward large-scale deployment of FEL.
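For context, the coordinator-side aggregation that FEL relies on is typically weighted federated averaging (FedAvg). The sketch below is a minimal illustration of that general pattern, not the paper's implementation; the function names, the use of NumPy, and the flat parameter-vector representation are all assumptions made for brevity.

```python
# Minimal FedAvg sketch (assumed illustration, not the paper's code):
# each edge device trains locally and sends only a model update;
# the coordinator averages updates weighted by local dataset size.
import numpy as np

def client_update(global_weights: np.ndarray, local_grad: np.ndarray,
                  lr: float = 0.01) -> np.ndarray:
    """One local SGD step on the device; raw data never leaves it."""
    return global_weights - lr * local_grad

def aggregate(updates: list[np.ndarray], sizes: list[int]) -> np.ndarray:
    """Coordinator combines client updates, weighted by dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Example: three edge devices send updates for a 4-parameter model.
global_w = np.zeros(4)
updates = [client_update(global_w, np.random.randn(4)) for _ in range(3)]
new_global = aggregate(updates, sizes=[100, 50, 150])
```

In this pattern, only the parameter vectors cross the network each round, which is exactly the traffic that communication-efficient FEL designs aim to reduce.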
Keywords:
Machine Learning: General