A Goal-Driven Tree-Structured Neural Model for Math Word Problems

Zhipeng Xie, Shichao Sun

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5299-5305. https://doi.org/10.24963/ijcai.2019/736

Most existing neural models for math word problems exploit Seq2Seq models to generate solution expressions sequentially from left to right, and their results are far from satisfactory due to the lack of the goal-driven mechanism commonly seen in human problem solving. This paper proposes a tree-structured neural model that generates an expression tree in a goal-driven manner. Given a math word problem, the model first identifies and encodes the goal to achieve, and then decomposes the goal into sub-goals combined by an operator in a top-down recursive way. This process is repeated until a goal is simple enough to be realized by a known quantity as a leaf node. During decoding, two-layer gated feed-forward networks implement each step of goal decomposition, and a recursive neural network encodes fulfilled subtrees into subtree embeddings, which provide a better representation of subtrees than their goals alone. Experimental results on the Math23K dataset show that our tree-structured model significantly outperforms several state-of-the-art models.
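To make the top-down goal decomposition and bottom-up subtree embedding concrete, the following is a minimal sketch, not the authors' implementation: dimensions, the exact gating form, the toy depth cutoff, and all names (GatedFF, SubtreeMerge, decode) are hypothetical, and the real model's operator/quantity prediction and attention over the problem encoding are omitted.

```python
# Minimal sketch of goal-driven tree decoding (assumed details, not the paper's code).
import torch
import torch.nn as nn

HIDDEN = 8  # hypothetical goal/embedding size


class GatedFF(nn.Module):
    """Two-layer feed-forward block with a sigmoid gate, used here to map a
    parent goal plus a context vector to a child sub-goal vector."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
        self.cand = nn.Linear(2 * dim, dim)

    def forward(self, goal, context):
        x = torch.cat([goal, context], dim=-1)
        return torch.sigmoid(self.gate(x)) * torch.tanh(self.cand(x))


class SubtreeMerge(nn.Module):
    """Recursively combines an operator embedding with the embeddings of the
    two child subtrees into a parent subtree embedding."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(3 * dim, dim)

    def forward(self, op_emb, left_emb, right_emb):
        return torch.tanh(self.lin(torch.cat([op_emb, left_emb, right_emb], dim=-1)))


def decode(goal, depth, left_net, right_net, merge, op_emb, quantity_emb, max_depth=2):
    """Toy top-down decoding: split a goal into left/right sub-goals until a
    depth limit, then realize the goal as a known quantity (leaf node).
    Returns the subtree embedding of the generated (sub)tree."""
    if depth >= max_depth:
        return quantity_emb                        # leaf: a known quantity
    left_goal = left_net(goal, op_emb)             # left sub-goal from parent goal + operator
    left_sub = decode(left_goal, depth + 1, left_net, right_net, merge,
                      op_emb, quantity_emb, max_depth)
    right_goal = right_net(goal, left_sub)         # right sub-goal conditions on fulfilled left subtree
    right_sub = decode(right_goal, depth + 1, left_net, right_net, merge,
                       op_emb, quantity_emb, max_depth)
    return merge(op_emb, left_sub, right_sub)      # bottom-up subtree embedding


if __name__ == "__main__":
    torch.manual_seed(0)
    root_goal = torch.randn(HIDDEN)                # would come from the problem encoder
    op_emb = torch.randn(HIDDEN)                   # embedding of a chosen operator
    quantity_emb = torch.randn(HIDDEN)             # embedding of a known quantity
    tree_emb = decode(root_goal, 0, GatedFF(HIDDEN), GatedFF(HIDDEN),
                      SubtreeMerge(HIDDEN), op_emb, quantity_emb)
    print(tree_emb.shape)                          # torch.Size([8])
```

Note the asymmetry in the sketch: the right sub-goal is produced only after the left subtree has been fully realized, so it can condition on the left subtree embedding, mirroring the paper's use of fulfilled-subtree representations during decoding.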
Keywords:
Natural Language Processing: Natural Language Processing
Machine Learning: Deep Learning
Machine Learning Applications: Applications of Supervised Learning