Negative Flux Aggregation to Estimate Feature Attributions

Xin Li, Deng Pan, Chengyin Li, Yao Qiang, Dongxiao Zhu

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 446-454. https://doi.org/10.24963/ijcai.2023/50

There are increasing demands for understanding the behavior of deep neural networks (DNNs), spurred by growing security and transparency concerns. Due to the multi-layer nonlinearity of deep neural network architectures, explaining DNN predictions remains an open problem, preventing us from gaining a deeper understanding of their mechanisms. To enhance the explainability of DNNs, we estimate input features' attributions to the prediction task using divergence and flux. Inspired by the divergence theorem in vector analysis, we develop a novel Negative Flux Aggregation (NeFLAG) formulation and an efficient approximation algorithm to estimate attribution maps. Unlike previous techniques, ours neither relies on fitting a surrogate model nor requires any path integration of gradients. Both qualitative and quantitative experiments demonstrate the superior performance of NeFLAG in generating more faithful attribution maps than competing methods. Our code is available at https://github.com/xinli0928/NeFLAG.
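The abstract invokes the divergence theorem, which equates the volume integral of a vector field's divergence with the flux of that field through the enclosing surface. As a purely illustrative sketch (not the paper's NeFLAG algorithm; the function names and the toy field are my own), the snippet below shows how the divergence of a gradient field can be estimated numerically with central finite differences, the kind of local quantity such a flux-based formulation works with:

```python
import numpy as np

def numerical_divergence(vec_field, x, eps=1e-4):
    """Estimate div F(x) = sum_i dF_i/dx_i of a vector field
    vec_field at point x via central finite differences.
    Illustrative only; not the NeFLAG approximation algorithm."""
    div = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        # Central difference of the i-th component along axis i
        div += (vec_field(x + e)[i] - vec_field(x - e)[i]) / (2 * eps)
    return div

# Sanity check on a toy field: for F(x) = x (identity field),
# div F equals the dimension of the space.
x0 = np.array([0.3, -1.2, 2.0])
print(numerical_divergence(lambda x: x, x0))  # ≈ 3.0
```

In an attribution setting the vector field would be the model's input gradient, and its (negative) divergence flags regions where the gradient field converges; the actual aggregation scheme is described in the full paper.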
Keywords:
AI Ethics, Trust, Fairness: ETF: Explainability and interpretability
AI Ethics, Trust, Fairness: ETF: Trustworthy AI