SALE-MLP: Structure Aware Latent Embeddings for GNN to Graph-free MLP Distillation
Harsh Pal, Sarthak Malik, Rajat Patel, Aakarsh Malhotra
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 6003-6011.
https://doi.org/10.24963/ijcai.2025/668
Graph Neural Networks (GNNs), with their ability to effectively handle non-Euclidean data structures, have demonstrated state-of-the-art performance in learning node- and graph-level representations. However, GNNs incur significant computational overhead due to their message-passing mechanisms, making them impractical for real-time, large-scale applications. Recently, GNN-to-MLP (G2M) knowledge distillation has emerged as a promising solution, using MLPs to reduce inference latency. However, existing methods often lack structural awareness (SA), limiting their ability to capture essential graph-specific information. Moreover, some methods require access to large-scale graphs, undermining their scalability. To address these issues, we propose SALE-MLP (Structure-Aware Latent Embeddings for GNN-to-Graph-Free MLP Distillation), a novel graph-free and structure-aware approach that leverages unsupervised structural losses to align the MLP feature space with the underlying graph structure. SALE-MLP relies neither on precomputed GNN embeddings nor on the graph during inference, making it efficient for real-world applications. Extensive experiments demonstrate that SALE-MLP outperforms existing G2M methods across tasks and datasets, achieving a 3–4% improvement in node classification in inductive settings while maintaining strong transductive performance.
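To make the general G2M setup described above concrete, the sketch below shows a minimal PyTorch training step for a student MLP that combines a supervised loss, a soft-label distillation term from a GNN teacher, and a simple contrastive-style structural alignment term computed on the latent embeddings. All names, loss weights, and the specific form of the structural loss are illustrative assumptions for exposition; they are not the exact SALE-MLP formulation from the paper. Note that the graph (edge_index) and teacher outputs are used only during training; inference uses node features alone.

```python
# Hypothetical sketch of GNN-to-MLP distillation with a structural
# alignment term. Loss names, weights, and formulation are illustrative,
# not the authors' exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentMLP(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, hid_dim)
        )
        self.classifier = nn.Linear(hid_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)              # latent node embeddings
        return z, self.classifier(z)     # embeddings and class logits

def structural_alignment_loss(z, edge_index):
    """Pull embeddings of adjacent nodes together and push random pairs apart.
    A simplified, unsupervised proxy for structure awareness."""
    src, dst = edge_index
    pos = F.cosine_similarity(z[src], z[dst]).mean()
    neg = F.cosine_similarity(z[src], z[dst[torch.randperm(dst.size(0))]]).mean()
    return neg - pos

def distill_step(mlp, optimizer, x, edge_index, teacher_logits, labels,
                 train_mask, alpha=0.5, beta=0.1, tau=2.0):
    """One training step: cross-entropy + soft-label KD + structural loss.
    edge_index and teacher_logits are needed only at training time."""
    mlp.train()
    optimizer.zero_grad()
    z, logits = mlp(x)
    ce = F.cross_entropy(logits[train_mask], labels[train_mask])
    kd = F.kl_div(F.log_softmax(logits / tau, dim=-1),
                  F.softmax(teacher_logits / tau, dim=-1),
                  reduction="batchmean") * tau * tau
    sa = structural_alignment_loss(z, edge_index)
    loss = ce + alpha * kd + beta * sa
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference, only the MLP forward pass on node features is required, which is what gives G2M students their low latency compared to message-passing GNNs.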
Keywords:
Machine Learning: ML: Sequence and graph learning
Machine Learning: ML: Deep learning architectures
Machine Learning: ML: Optimization
Machine Learning: ML: Representation learning
