A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees

Klaus Broelemann, Gjergji Kasneci

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2030-2037. https://doi.org/10.24963/ijcai.2019/281

Machine learning algorithms aim at minimizing the number of false decisions and increasing the accuracy of predictions. However, the high predictive power of advanced algorithms comes at the cost of transparency. State-of-the-art methods, such as neural networks and ensemble methods, result in highly complex models with little transparency. We propose shallow model trees as a way to combine simple and highly transparent predictive models for higher predictive power without losing the transparency of the original models. We present a novel split criterion for model trees that allows for significantly higher predictive power than state-of-the-art model trees while maintaining the same level of simplicity. This novel approach finds split points that allow the underlying simple models to make better predictions on the corresponding data. In addition, we introduce multiple mechanisms to increase the transparency of the resulting trees.
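To make the abstract's idea concrete, the following is a minimal illustrative sketch of a gradient-based split search for a model tree node. It is not the paper's exact criterion: the scoring function, helper names, and the use of a logistic-regression leaf model are assumptions for illustration. The guiding intuition matches the abstract: a candidate split is scored by how far the per-sample loss gradients of a single fitted model are from cancelling out on each side, since a large aggregate gradient on one side suggests that side would benefit from its own model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gradient_split_score(X, y, feature, threshold, model):
    """Illustrative score for a candidate split (not the paper's formula).

    Uses the gradient of the logistic loss w.r.t. the linear model's
    weights for each sample, (p - y) * x. If the single fitted model is a
    poor fit on one side of the split, the summed gradient on that side
    is far from zero, so the split scores highly.
    """
    p = model.predict_proba(X)[:, 1]
    grads = (p - y)[:, None] * X          # per-sample loss gradients
    left = X[:, feature] <= threshold
    right = ~left
    if left.sum() < 2 or right.sum() < 2:  # reject degenerate splits
        return -np.inf
    return (np.linalg.norm(grads[left].sum(axis=0))
            + np.linalg.norm(grads[right].sum(axis=0)))

def best_split(X, y):
    """Fit one simple model at the node, then scan all axis-aligned
    splits and return (feature, threshold, score) with the best score."""
    model = LogisticRegression().fit(X, y)
    best = (None, None, -np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            s = gradient_split_score(X, y, f, t, model)
            if s > best[2]:
                best = (f, t, s)
    return best
```

On data where the label depends on one feature with a sign flip governed by another, a single linear model fits poorly, and this criterion favors splitting on one of the informative features rather than on noise, after which a separate simple model can be fitted to each side.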
Keywords:
Machine Learning: Explainable Machine Learning
Machine Learning: Interpretability
Machine Learning: Classification