Simple Training of Dependency Parsers via Structured Boosting

Qin Iris Wang, Dekang Lin, Dale Schuurmans

Abstract

Recently, significant progress has been made on learning structured predictors via coordinated training algorithms such as conditional random fields and maximum margin Markov networks. Unfortunately, these techniques rely on specialized training algorithms, are complex to implement, and are expensive to run. We present a much simpler approach to training structured predictors: a boosting-like procedure applied to standard supervised training methods. The idea is to learn a local predictor using standard methods, such as logistic regression or support vector machines, and then achieve improved structured classification by "boosting" the influence of misclassified components after structured prediction, re-training the local predictor, and repeating. Further improvements in structured prediction accuracy can be achieved by incorporating "dynamic" features, an extension whereby the features for one predicted component can depend on the predictions already made for other components.
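
As a rough illustration, the boosting loop described above can be sketched in a few lines of Python. Every interface here (the decode callback, the per-sentence index groups, the fixed boost factor) is an assumption made for the sketch, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def structured_boost(X, y, groups, decode, n_rounds=10, boost=2.0):
    """Train a local classifier, run structured prediction per sentence,
    and up-weight the components the structured output gets wrong.

    X, y   : one feature row / gold label per local component
             (e.g. per candidate dependency link)
    groups : list of index arrays, one per sentence
    decode : decode(clf, idx) -> predicted 0/1 label per component,
             e.g. by extracting the highest-scoring parse tree
    """
    w = np.ones(len(y), dtype=float)
    clf = None
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, y, sample_weight=w)      # standard supervised training
        for idx in groups:
            pred = decode(clf, idx)         # structured prediction
            wrong = idx[pred != y[idx]]     # misclassified components
            w[wrong] *= boost               # boost their influence
    return clf
```

Because the base learner only needs to accept per-example weights, any standard classifier (logistic regression, an SVM with instance weighting, etc.) can be dropped in without a specialized structured training algorithm.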

We apply our techniques to the problem of learning dependency parsers from annotated natural language corpora. Using logistic regression as an efficient base classifier (for predicting dependency links between word pairs), we can efficiently train a dependency parsing model via structured boosting that achieves state-of-the-art results on English and surpasses the state of the art on Chinese.
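
To make the base-classifier setup concrete, one plausible decoder (a sketch under our own assumptions; the paper's actual decoding and feature details may differ) scores every candidate head-modifier pair with the logistic regression model and assembles the best tree with a maximum spanning tree search. Here pair_features is a hypothetical feature extractor and clf a classifier trained as in the sketch above:

```python
import networkx as nx
import numpy as np

def decode_sentence(clf, n_words, pair_features):
    """Score every candidate head -> modifier link with the local
    classifier, then assemble the best dependency tree with the
    Chu-Liu-Edmonds maximum spanning arborescence algorithm."""
    G = nx.DiGraph()
    for m in range(1, n_words):                 # word 0 is the artificial ROOT
        for h in range(n_words):
            if h == m:
                continue
            x = pair_features(h, m).reshape(1, -1)
            p = clf.predict_proba(x)[0, 1]      # P(h -> m is a dependency)
            # log-probabilities make edge weights additive over the tree
            G.add_edge(h, m, weight=np.log(p + 1e-12))
    tree = nx.maximum_spanning_arborescence(G)  # highest-scoring parse tree
    return {m: h for h, m in tree.edges()}      # modifier -> predicted head
```

Since no edges into the ROOT node are added, the recovered arborescence is always rooted at position 0, as a dependency tree requires.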

URL: http://www.cs.ualberta.ca/~wqin/papers/ijcai07.pdf