Learning Optimal Oblique Decision Trees with (Max)SAT

Florent Avellaneda

Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 2558-2565. https://doi.org/10.24963/ijcai.2025/285

Decision trees are widely used in machine learning for their interpretability and effectiveness in classification tasks. Traditional axis-parallel decision trees partition data using single-feature thresholds at each node, but they often struggle to represent complex, non-axis-aligned decision boundaries efficiently, which can result in unnecessarily large and less interpretable trees. Oblique decision trees address this limitation by using linear combinations of features at each node, allowing a more natural representation of complex decision boundaries while maintaining interpretability through sparse linear combinations. However, learning optimal oblique decision trees poses a significant computational challenge, as existing methods predominantly rely on suboptimal greedy heuristics. In this paper, we propose a novel approach to learning globally optimal oblique decision trees by reformulating the problem as a (Max)SAT instance. By leveraging state-of-the-art (Max)SAT solvers, our method efficiently explores the solution space to identify optimal trees. Experiments on benchmark datasets demonstrate that our approach generates optimal oblique decision trees within reasonable computational time for small to medium-sized datasets.
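The distinction the abstract draws between axis-parallel and oblique splits can be sketched as follows. This is an illustrative example only, not the paper's (Max)SAT method: the function names, weights, and threshold are hypothetical, chosen to show how one oblique node captures a diagonal boundary that axis-parallel tests can only approximate.

```python
import numpy as np

def axis_parallel_split(x, feature, threshold):
    """Classic split: route left iff a single feature is below a threshold."""
    return x[feature] <= threshold

def oblique_split(x, w, b):
    """Oblique split: route left iff a weighted sum of features is below b.
    A sparse w (few nonzero entries) keeps the test interpretable."""
    return float(np.dot(w, x)) <= b

# A diagonal boundary x0 + x1 <= 1 needs a single oblique node, whereas an
# axis-parallel tree must stack many single-feature tests to approximate it.
w = np.array([1.0, 1.0])  # sparse linear combination: x0 + x1
b = 1.0

print(oblique_split(np.array([0.2, 0.3]), w, b))  # below the diagonal -> True
print(oblique_split(np.array([0.8, 0.9]), w, b))  # above the diagonal -> False
```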
Keywords:
Constraint Satisfaction and Optimization: CSO: Applications
Constraint Satisfaction and Optimization: CSO: Modeling
Machine Learning: ML: Classification
Machine Learning: ML: Explainable/Interpretable machine learning