Attention-based Multi-level Feature Fusion for Named Entity Recognition

Zhiwei Yang, Hechang Chen, Jiawei Zhang, Jing Ma, Yi Chang

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3594-3600. https://doi.org/10.24963/ijcai.2020/497

Named entity recognition (NER) is a fundamental task in natural language processing (NLP). Recently, representation learning methods (e.g., character and word embeddings) have achieved promising recognition results. However, existing models consider only partial features derived from words or characters, and fail to integrate semantic and syntactic information (e.g., capitalization, inter-word relations, keywords, and lexical phrases) from multi-level perspectives. Intuitively, multi-level features can be helpful when recognizing named entities in complex sentences. In this study, we propose a novel framework called attention-based multi-level feature fusion (AMFF), which captures multi-level features from different perspectives to improve NER. Our model consists of four components that respectively capture local character-level, global character-level, local word-level, and global word-level features, which are then fed into a BiLSTM-CRF network for the final sequence labeling. Extensive experimental results on four benchmark datasets show that our proposed model outperforms a set of state-of-the-art baselines.
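
The abstract describes four feature-extraction components whose per-token outputs are combined and passed to a BiLSTM-CRF tagger. Below is a minimal PyTorch sketch of that overall shape, assuming each component produces a feature view of the same dimension and that the views are fused by a simple learned attention over views. The class names, dimensions, and fusion formulation are illustrative assumptions rather than the paper's exact architecture, and the CRF decoding layer is omitted for brevity.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Fuses several feature views of each token with learned attention weights.

    Hypothetical sketch of the idea in the abstract (attention over local/global
    character- and word-level features); the paper's formulation may differ.
    """

    def __init__(self, feature_dim: int):
        super().__init__()
        self.score = nn.Linear(feature_dim, 1)

    def forward(self, views):
        # views: list of tensors, each of shape (batch, seq_len, feature_dim)
        stacked = torch.stack(views, dim=2)                    # (B, T, V, D)
        scores = self.score(stacked).squeeze(-1)               # (B, T, V)
        weights = torch.softmax(scores, dim=-1)                # attention over views
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=2)   # (B, T, D)
        return fused


class FusionBiLSTMTagger(nn.Module):
    """Hypothetical AMFF-style tagger: fused features -> BiLSTM -> label scores.

    A CRF layer (e.g., from an external package) would normally decode the
    emission scores; this sketch stops at per-token scores.
    """

    def __init__(self, feature_dim=100, hidden_dim=200, num_labels=9):
        super().__init__()
        self.fusion = AttentionFusion(feature_dim)
        self.bilstm = nn.LSTM(feature_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_labels)

    def forward(self, views):
        fused = self.fusion(views)
        encoded, _ = self.bilstm(fused)
        return self.emit(encoded)      # (B, T, num_labels) emission scores


if __name__ == "__main__":
    B, T, D = 2, 10, 100
    # Placeholder tensors standing in for the four feature views described in
    # the abstract: local char-level, global char-level, local word-level,
    # and global word-level features.
    views = [torch.randn(B, T, D) for _ in range(4)]
    tagger = FusionBiLSTMTagger(feature_dim=D)
    print(tagger(views).shape)   # torch.Size([2, 10, 9])
```

In practice, the four views would come from the character- and word-level encoders described in the paper rather than random tensors, and the emission scores would be decoded by a CRF layer to produce the final label sequence.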
Keywords:
Natural Language Processing: Information Extraction
Natural Language Processing: Tagging, chunking, and parsing
Natural Language Processing: Named Entities
Natural Language Processing: Natural Language Processing