Unsupervised and Few-Shot Parsing from Pretrained Language Models (Extended Abstract)
Zhiyuan Zeng, Deyi Xiong
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Journal Track. Pages 6995-7000.
https://doi.org/10.24963/ijcai.2023/797
This paper proposes two unsupervised constituent parsing models (UPOA and UPIO) that compute inside and outside association scores solely from the self-attention weight matrices of a pretrained language model. The unsupervised parsing models are further extended to few-shot parsing models (FPOA, FPIO) that use a few annotated trees to fine-tune the linear projection matrices in self-attention. Experiments on PTB and SPMRL show that both the unsupervised and few-shot parsing methods outperform or are comparable to previous methods.
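To make the idea concrete, the sketch below shows one plausible way to derive inside and outside association scores for a candidate span from a self-attention weight matrix. The function name and the exact aggregation (mean attention within the span vs. between the span and the rest of the sentence) are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def span_scores(attn, i, j):
    """Illustrative inside/outside association scores for span [i, j)
    given a self-attention matrix `attn` (n x n, rows sum to 1).
    This is a sketch; the paper's exact scoring functions may differ."""
    n = attn.shape[0]
    # Inside association: average attention among tokens within the span.
    inside_score = attn[i:j, i:j].mean()
    # Outside association: average attention between span tokens and
    # tokens outside the span (in both directions).
    mask = np.ones(n, dtype=bool)
    mask[i:j] = False
    if mask.any():
        outside_score = (attn[i:j][:, mask].mean()
                         + attn[mask][:, i:j].mean()) / 2
    else:
        outside_score = 0.0
    return inside_score, outside_score

# Toy example: uniform attention over a 4-token sentence.
attn = np.full((4, 4), 0.25)
inside, outside = span_scores(attn, 1, 3)
```

In an unsupervised parser of this kind, such span scores would drive a top-down or chart-based search for the tree whose spans are most internally coherent (high inside association) and most weakly linked to their context (low outside association).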
Keywords:
Natural Language Processing: NLP: Tagging, chunking, and parsing
Natural Language Processing: General