Conditional GAN with Discriminative Filter Generation for Text-to-Video Synthesis

Yogesh Balaji, Martin Renqiang Min, Bing Bai, Rama Chellappa, Hans Peter Graf

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 1995-2001. https://doi.org/10.24963/ijcai.2019/276

Developing conditional generative models for text-to-video synthesis is an extremely challenging yet important research topic in machine learning. In this work, we address this problem by introducing the Text-Filter conditioning Generative Adversarial Network (TFGAN), a conditional GAN model with a novel multi-scale text-conditioning scheme that improves text-video associations. By combining the proposed conditioning scheme with a deep GAN architecture, TFGAN generates high-quality videos from text on challenging real-world video datasets. In addition, we construct a synthetic dataset of text-conditioned moving shapes to systematically evaluate our conditioning scheme. Extensive experiments demonstrate that TFGAN significantly outperforms existing approaches and can also generate videos of novel categories not seen during training.
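
To make the text-filter conditioning idea concrete, below is a minimal PyTorch sketch of one plausible realization: a text embedding is mapped to a bank of convolutional filters, which are then applied to the discriminator's video feature map at a single scale. The module name, layer shapes, and hyperparameters here are illustrative assumptions, not the paper's exact architecture; TFGAN applies such conditioning at multiple scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextFilterConditioning(nn.Module):
    """Sketch of text-filter conditioning (hypothetical shapes):
    generate per-sample conv filters from a text embedding and
    convolve them with a discriminator feature map."""

    def __init__(self, text_dim, feat_channels, kernel_size=3, num_filters=64):
        super().__init__()
        self.kernel_size = kernel_size
        self.num_filters = num_filters
        # Maps the sentence embedding to a flat filter bank
        # (one bank per sample in the batch).
        self.filter_gen = nn.Linear(
            text_dim, num_filters * feat_channels * kernel_size * kernel_size
        )

    def forward(self, video_feat, text_emb):
        # video_feat: (B, C, H, W) feature map at one discriminator scale
        # text_emb:   (B, text_dim) text embedding
        B, C, H, W = video_feat.shape
        filters = self.filter_gen(text_emb).view(
            B * self.num_filters, C, self.kernel_size, self.kernel_size
        )
        # Grouped convolution applies each sample's own generated
        # filters to its own feature map in a single call.
        feat = video_feat.view(1, B * C, H, W)
        out = F.conv2d(feat, filters, padding=self.kernel_size // 2, groups=B)
        # Resulting text-conditioned responses feed the real/fake decision.
        return out.view(B, self.num_filters, H, W)
```

The grouped-convolution trick folds the batch dimension into the channel dimension so that each sample is convolved with its own text-generated filters without an explicit Python loop, which is the usual way to implement dynamic per-sample filters efficiently.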
Keywords:
Machine Learning: Learning Generative Models
Machine Learning: Deep Learning