Exploiting Label Skewness for Spiking Neural Networks in Federated Learning

Di Yu, Xin Du, Linshan Jiang, Huijing Zhang, Shuiguang Deng

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 6895-6903. https://doi.org/10.24963/ijcai.2025/767

The energy efficiency of deep spiking neural networks (SNNs) aligns with the constraints of resource-limited edge devices, making SNNs a promising foundation for intelligent applications that leverage the extensive data collected by these devices. To safeguard data privacy, federated learning (FL) enables collaborative training of SNN-based models on data distributed across edge devices without transmitting local data to a central server. However, existing FL approaches struggle with label-skewed data across devices, which induces drift in the local SNN models and consequently impairs the performance of the global SNN model. To tackle this problem, we propose a novel framework called FedLEC, which incorporates intra-client label weight calibration to balance the learning intensity across local labels and inter-client knowledge distillation to mitigate local SNN model bias caused by absent labels. Extensive experiments with three differently structured SNNs across five datasets (three non-neuromorphic and two neuromorphic) demonstrate the effectiveness of FedLEC. Compared to seven state-of-the-art FL algorithms, FedLEC improves the accuracy of the global SNN model by approximately 11.59% on average under various label-skew settings.
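To make the two components concrete, the sketch below pairs a label-frequency-calibrated cross-entropy loss with a distillation term toward a frozen global model. This is a minimal PyTorch illustration under assumed design choices (inverse-frequency class weighting, a fixed softmax temperature, and a fixed distillation coefficient), not the paper's exact formulation; the function name and hyperparameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def fedlec_style_local_loss(logits, global_logits, targets, label_counts,
                            temperature=2.0, distill_weight=0.5):
    """Illustrative local objective: calibrated cross-entropy plus
    distillation toward the frozen global model's predictions.

    label_counts: per-class sample counts on this client (zeros allowed
    for absent labels). All hyperparameters here are assumptions.
    """
    # Intra-client label weight calibration: down-weight frequent labels
    # and up-weight rare ones (inverse-frequency weighting is one
    # plausible choice, not necessarily the paper's scheme).
    counts = label_counts.float().clamp(min=1.0)
    class_weights = counts.sum() / (counts.numel() * counts)
    ce = F.cross_entropy(logits, targets, weight=class_weights)

    # Inter-client knowledge distillation: match the softened output
    # distribution of the global model, which still carries signal for
    # labels this client never observes.
    t = temperature
    kd = F.kl_div(F.log_softmax(logits / t, dim=1),
                  F.softmax(global_logits.detach() / t, dim=1),
                  reduction="batchmean") * (t * t)

    return ce + distill_weight * kd

# Example: 10 classes, a client that has only seen classes 0-4.
logits = torch.randn(8, 10, requires_grad=True)      # local SNN outputs
global_logits = torch.randn(8, 10)                   # frozen global model outputs
targets = torch.randint(0, 5, (8,))
label_counts = torch.tensor([120, 80, 60, 30, 10, 0, 0, 0, 0, 0])
loss = fedlec_style_local_loss(logits, global_logits, targets, label_counts)
loss.backward()
```

In a FedLEC-style round, each client would presumably minimize such an objective locally before server-side aggregation; the distillation term supplies gradient signal for labels absent from the client's data, which is what counteracts the local bias described above.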
Keywords:
Machine Learning: ML: Federated learning
Computer Vision: CV: Applications and Systems
Data Mining: DM: Class imbalance and unequal cost
Humans and AI: HAI: Brain sciences