FedCPD: Personalized Federated Learning with Prototype-Enhanced Representation and Memory Distillation
Kaili Jin, Li Xu, Xiaoding Wang, Sun-Yuan Hsieh, Jie Wu, Limei Lin
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5498-5507.
https://doi.org/10.24963/ijcai.2025/612
Federated learning, as a distributed learning framework, aims to develop a global model while preserving client privacy. However, the heterogeneity of client data leads to fairness issues and reduced performance. Techniques such as parameter decoupling and prototype learning appear promising, yet challenges such as forgetting historical data and limited generalization persist. These methods also lack local insight: locally trained features are prone to overfitting, which harms generalization during global parameter aggregation. To address these challenges, we propose FedCPD, a personalized federated learning framework. FedCPD preserves historical information, reduces information loss, and increases personalization through hierarchical feature distillation and cross-layer feature fusion. Moreover, we employ representation techniques such as prototype contrastive learning and prototype alignment to capture diverse client data features, thereby improving model generalization and fairness. Experiments show that FedCPD outperforms state-of-the-art models, improving generalization by up to 10.40% and personalization by up to 4.90%, highlighting its effectiveness and superiority.
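The abstract only names the components; as a rough illustration of what prototype contrastive learning typically looks like, the following is a minimal PyTorch sketch assuming class-mean prototypes and an InfoNCE-style objective. The function names, the temperature parameter, and the loss form are assumptions made here for illustration, not FedCPD's actual formulation.

import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    # One prototype per class: the mean embedding of that class's samples
    # (a common definition; the paper may aggregate prototypes differently).
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos

def prototype_contrastive_loss(features, labels, prototypes, temperature=0.5):
    # InfoNCE-style objective: pull each normalized feature toward its own
    # class prototype and push it away from the other classes' prototypes.
    features = F.normalize(features, dim=1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / temperature  # (batch, num_classes)
    return F.cross_entropy(logits, labels)

# Hypothetical usage on a batch of local client embeddings:
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
protos = class_prototypes(feats, labels, num_classes=10)
loss = prototype_contrastive_loss(feats, labels, protos)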
Keywords:
Machine Learning: ML: Representation learning
AI Ethics, Trust, Fairness: ETF: Fairness and diversity
Multidisciplinary Topics and Applications: MTA: Ubiquitous computing systems
