Drop Redundant, Shrink Irrelevant: Selective Knowledge Injection for Language Pretraining
Ningyu Zhang, Shumin Deng, Xu Cheng, Xi Chen, Yichi Zhang, Wei Zhang, Huajun Chen
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 4007-4014.
https://doi.org/10.24963/ijcai.2021/552
Previous research has demonstrated the power of leveraging prior knowledge to improve the performance of deep models in natural language processing. However, traditional methods neglect the fact that external knowledge bases contain redundant and irrelevant knowledge. In this study, we conduct an in-depth empirical investigation of downstream tasks and find that knowledge-enhanced approaches do not always yield satisfactory improvements. We then investigate the fundamental reasons for ineffective knowledge infusion and present selective injection for language pretraining, a model-agnostic method that is readily pluggable into previous approaches. Experimental results on benchmark datasets demonstrate that our approach can enhance state-of-the-art knowledge injection methods.
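The abstract does not spell out the selection mechanism, so the following is only an illustrative sketch of the general idea suggested by the title: drop knowledge the pretrained model already captures (redundant) and shrink knowledge unrelated to the input (irrelevant) before injection. All names, scoring functions, and thresholds (select_knowledge, redundancy_score, relevance_score) are hypothetical and not taken from the paper.

```python
# Illustrative sketch, not the paper's implementation: filter retrieved
# knowledge triples before injecting them into a pretrained language model.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class KnowledgeTriple:
    head: str
    relation: str
    tail: str


def select_knowledge(
    sentence: str,
    candidates: List[KnowledgeTriple],
    redundancy_score: Callable[[str, KnowledgeTriple], float],
    relevance_score: Callable[[str, KnowledgeTriple], float],
    redundancy_threshold: float = 0.9,
    relevance_threshold: float = 0.5,
) -> List[KnowledgeTriple]:
    """Keep only triples the model does not already 'know' (drop redundant)
    and that relate to the input sentence (shrink irrelevant)."""
    selected = []
    for triple in candidates:
        if redundancy_score(sentence, triple) >= redundancy_threshold:
            continue  # the pretrained model already captures this fact
        if relevance_score(sentence, triple) < relevance_threshold:
            continue  # the fact has little to do with the input
        selected.append(triple)
    return selected


if __name__ == "__main__":
    # Toy scoring functions standing in for, e.g., LM cloze confidence
    # (redundancy) and embedding similarity to the input (relevance).
    def toy_redundancy(sentence: str, t: KnowledgeTriple) -> float:
        return 0.95 if t.relation == "capital_of" else 0.1

    def toy_relevance(sentence: str, t: KnowledgeTriple) -> float:
        return 1.0 if t.head.lower() in sentence.lower() else 0.0

    cands = [
        KnowledgeTriple("Paris", "capital_of", "France"),
        KnowledgeTriple("Paris", "founded_in", "3rd century BC"),
        KnowledgeTriple("Berlin", "capital_of", "Germany"),
    ]
    kept = select_knowledge("Paris hosted the 2024 Olympics.", cands,
                            toy_redundancy, toy_relevance)
    print(kept)  # only the non-redundant, relevant triple about Paris remains
```

The filtered triples would then be passed to whatever injection method the downstream model uses; the selection step itself is model-agnostic, which is what makes it pluggable into prior knowledge-injection approaches.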
Keywords:
Natural Language Processing: Information Extraction
Natural Language Processing: Natural Language Processing
Natural Language Processing: Question Answering