Regularisation for Efficient Softmax Parameter Generation in Low-Resource Text Classifiers

Daniel Grießhaber, Johannes Maucher, Ngoc Thang Vu

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 5058-5066. https://doi.org/10.24963/ijcai.2023/562

Meta-learning has made tremendous progress in recent years and has been demonstrated to be particularly suitable in low-resource settings where training data is very limited. However, meta-learning models still require large numbers of training tasks to achieve good generalisation. Since labelled training data may be sparse, self-supervision-based approaches can further improve performance on downstream tasks. Although no labelled data is necessary for such training, a large corpus of unlabelled text needs to be available. In this paper, we improve on recent advances in meta-learning for natural language models that allow training on a diverse set of training tasks for few-shot, low-resource target tasks. We introduce a way to generate new training data that requires neither additional supervised nor unsupervised datasets. We evaluate the method on a diverse set of NLP tasks and show that the model decreases in performance when trained on this data without further adjustments. We therefore introduce and evaluate two methods for regularising the training process and show that they not only improve performance when used in conjunction with the new training data, but also improve average performance when training only on the original data, compared to the baseline.
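To make the setting concrete, the following is a minimal, hypothetical sketch of episodic few-shot classification in which the softmax layer's parameters are generated from support-set embeddings (here, simply as class prototypes) rather than learned directly, with an L2 penalty on the generated parameters as one possible regulariser. All function names, the prototype-based generator, and the choice of regulariser are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def generate_softmax_params(support_emb, support_labels, n_classes):
    """Generate softmax weights from the support set.

    Illustrative choice: each class weight vector is the mean embedding
    (prototype) of that class's support examples.
    """
    dim = support_emb.shape[1]
    W = np.zeros((n_classes, dim))
    for c in range(n_classes):
        W[c] = support_emb[support_labels == c].mean(axis=0)
    return W

def episode_loss(W, query_emb, query_labels, reg_lambda=0.01):
    """Cross-entropy on the query set plus an L2 penalty on generated weights.

    The L2 term stands in for a generic regulariser on the generated
    softmax parameters; the paper's actual regularisation methods differ.
    """
    logits = query_emb @ W.T
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(query_labels)), query_labels]).mean()
    return nll + reg_lambda * np.sum(W ** 2)
```

In a meta-learning loop, an encoder producing `support_emb` and `query_emb` would be updated across many such episodes; the sketch above only shows the parameter generation and the regularised episode objective.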
Keywords:
Natural Language Processing: NLP: Text classification
Machine Learning: ML: Few-shot learning
Machine Learning: ML: Meta-learning