Controllable Text Generation for Open-Domain Creativity and Fairness

Nanyun (Violet) Peng

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Early Career. Pages 5821-5825. https://doi.org/10.24963/ijcai.2022/818

Recent advances in large pre-trained language models have demonstrated strong results in generating natural language and have significantly improved performance on many natural language generation (NLG) applications such as machine translation and text summarization. However, when the generation task is more open-ended and the content is under-specified, existing techniques struggle to generate long-term coherent and creative content. Moreover, the models exhibit and even amplify social biases learned from the training corpora. This happens because generation models are trained to capture surface patterns (i.e., sequences of words), rather than underlying semantics, discourse structures, and background knowledge, including social norms. In this paper, I introduce our recent work on controllable text generation to enhance the creativity and fairness of language generation models. We explore hierarchical generation and constrained decoding, with applications to creative language generation, including story, poetry, and figurative language, as well as bias mitigation for generation models.
Keywords:
EC: Natural Language Generation