A Multi-Modal Neural Geometric Solver with Textual Clauses Parsed from Diagram
Ming-Liang Zhang, Fei Yin, Cheng-Lin Liu

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 3374-3382. https://doi.org/10.24963/ijcai.2023/376

Geometry problem solving (GPS) is a high-level mathematical reasoning task requiring the capacities of multi-modal fusion and geometric knowledge application. Recently, neural solvers have shown great potential in GPS but still fall short in diagram representation and modal fusion. In this work, we convert diagrams into basic textual clauses to describe diagram features effectively, and propose a new neural solver called PGPSNet to fuse multi-modal information efficiently. Combining structural and semantic pre-training, data augmentation and self-limited decoding, PGPSNet is endowed with rich knowledge of geometry theorems and geometric representation, and therefore promotes geometric understanding and reasoning. In addition, to facilitate research on GPS, we build a new large-scale, finely annotated GPS dataset named PGPS9K, labeled with both fine-grained diagram annotations and interpretable solution programs. Experiments on PGPS9K and the existing Geometry3K dataset validate the superiority of our method over state-of-the-art neural solvers. Our code, dataset and appendix material are available at https://github.com/mingliangzhang2018/PGPS.
Keywords:
Knowledge Representation and Reasoning: KRR: Qualitative, geometric, spatial, and temporal reasoning
Machine Learning: ML: Multi-modal learning
Multidisciplinary Topics and Applications: MDA: Education