IRC-GAN: Introspective Recurrent Convolutional GAN for Text-to-video Generation

Kangle Deng, Tianyi Fei, Xin Huang, Yuxin Peng

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2216-2222. https://doi.org/10.24963/ijcai.2019/307

Automatically generating videos from given text is a highly challenging task, in which visual quality and semantic consistency with the captions are two critical issues. In existing methods, the information in previously generated frames is not fully exploited when generating a specific frame, and an effective way to measure the semantic accordance between videos and captions remains to be established. To address these issues, we present a novel Introspective Recurrent Convolutional GAN (IRC-GAN) approach. First, we propose a recurrent transconvolutional generator, in which LSTM cells are integrated with 2D transconvolutional layers. Because 2D transconvolutional layers put more emphasis on the details of each frame than 3D ones, our generator takes into account both the sharpness of each video frame and the temporal coherence across the whole video, and thus can generate videos with better visual quality. Second, we propose mutual information introspection to semantically align the generated videos with the text. Unlike other methods, which simply judge whether the video and the text match, we use mutual information to concretely measure their semantic consistency. In this way, our model is able to introspect the semantic distance between the generated video and the corresponding text, and to minimize it so as to boost semantic consistency. We conduct experiments on three datasets and compare with state-of-the-art methods. Experimental results demonstrate the effectiveness of our IRC-GAN in generating plausible videos from given text.
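To illustrate the idea of the recurrent transconvolutional generator described above, the following is a minimal sketch (not the authors' released code): an LSTM cell unrolls over time to produce a per-frame hidden state, and a shared stack of 2D transposed convolutions decodes each state into one frame. The layer sizes, the 64x64 resolution, and the way the text embedding and noise are combined are illustrative assumptions.

import torch
import torch.nn as nn


class RecurrentTransconvGenerator(nn.Module):
    def __init__(self, text_dim=256, noise_dim=100, hidden_dim=512, num_frames=16):
        super().__init__()
        self.num_frames = num_frames
        self.hidden_dim = hidden_dim
        # LSTM cell models temporal coherence across frames.
        self.cell = nn.LSTMCell(text_dim + noise_dim, hidden_dim)
        # Shared 2D transconvolutional decoder maps each hidden state to a 64x64 RGB frame.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(hidden_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                     # 64x64
        )

    def forward(self, text_emb, noise):
        # text_emb: (B, text_dim) sentence embedding; noise: (B, noise_dim) random vector.
        batch = text_emb.size(0)
        h = text_emb.new_zeros(batch, self.hidden_dim)
        c = text_emb.new_zeros(batch, self.hidden_dim)
        step_input = torch.cat([text_emb, noise], dim=1)
        frames = []
        for _ in range(self.num_frames):
            # Each frame is conditioned on the recurrent state built from previous steps.
            h, c = self.cell(step_input, (h, c))
            frames.append(self.decode(h.view(batch, self.hidden_dim, 1, 1)))
        # Returns a video tensor of shape (B, T, 3, 64, 64).
        return torch.stack(frames, dim=1)


if __name__ == "__main__":
    gen = RecurrentTransconvGenerator()
    video = gen(torch.randn(2, 256), torch.randn(2, 100))
    print(video.shape)  # torch.Size([2, 16, 3, 64, 64])

Because the decoder is purely 2D and shared across time steps, per-frame detail is decoupled from temporal modeling, which the LSTM cell handles; this is the intuition behind preferring 2D over 3D transconvolutions in the abstract.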
Keywords:
Machine Learning: Learning Generative Models
Computer Vision: Language and Vision