What Does My GNN Really Capture? On Exploring Internal GNN Representations

Luca Veyrin-Forrer, Ataollah Kamal, Stefan Duffner, Marc Plantevit, Céline Robardet

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 747-752. https://doi.org/10.24963/ijcai.2022/105

Graph Neural Networks (GNNs) are very efficient at classifying graphs, but their internal workings are opaque, which limits their field of application. Existing methods to explain GNNs focus on disclosing the relationships between input graphs and the model's decisions. In this article, we propose a method that goes further and isolates the internal features, hidden in the network layers, that are automatically identified by the GNN and used in the decision process. We show that this method makes it possible to identify the parts of the input graphs used by the GNN with much less bias than state-of-the-art methods, and thus to bring confidence to the decision process.
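To make the idea of inspecting hidden-layer representations concrete, the sketch below captures per-node embeddings from a small graph classifier with PyTorch forward hooks and binarizes them into activation patterns (which hidden components fire for each node). This is a minimal illustration, assuming a two-layer GCN built with PyTorch Geometric; the model, layer names, and thresholding are illustrative assumptions, not the authors' exact procedure, which additionally mines these internal features per class.

import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.data import Data, Batch

class SmallGNN(torch.nn.Module):
    """Toy graph classifier; hypothetical stand-in for the model under study."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.lin = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        h1 = F.relu(self.conv1(x, edge_index))
        h2 = F.relu(self.conv2(h1, edge_index))
        return self.lin(global_mean_pool(h2, batch))

# Capture the hidden node embeddings of each layer with forward hooks,
# without modifying the model itself.
activations = {}
def save_to(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model = SmallGNN(in_dim=7, hidden_dim=16, num_classes=2)
model.conv1.register_forward_hook(save_to("layer1"))
model.conv2.register_forward_hook(save_to("layer2"))

# A toy 4-node path graph with random features (illustrative data only).
x = torch.randn(4, 7)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
data = Batch.from_data_list([Data(x=x, edge_index=edge_index)])

model.eval()
with torch.no_grad():
    logits = model(data.x, data.edge_index, data.batch)

# The hooks fire on the conv outputs, i.e. pre-ReLU values, so thresholding
# at 0 marks exactly the components that survive the ReLU. These boolean
# "activation patterns" are the kind of internal feature a subsequent
# pattern-mining step could analyse per predicted class.
patterns = {name: (h > 0) for name, h in activations.items()}
print(patterns["layer1"].shape)   # (num_nodes, hidden_dim)
print(logits.argmax(dim=-1))      # predicted class of the toy graph

Hooks are used here so the trained model stays untouched: the same capture code works on any layer of an existing GNN without rewriting its forward pass.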
Keywords:
AI Ethics, Trust, Fairness: Explainability and Interpretability
Data Mining: Frequent Pattern Mining
Machine Learning: Explainable/Interpretable Machine Learning
Machine Learning: Sequence and Graph Learning