MIXGAN: Learning Concepts from Different Domains for Mixture Generation

Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2212-2219. https://doi.org/10.24963/ijcai.2018/306

In this work, we present an attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thus generating a new domain with the learned concepts. In particular, we propose a mixture generative adversarial network (MIXGAN). MIXGAN learns the concepts of content and style from two domains respectively, and can therefore join them for mixture generation in a new domain, i.e., generating images with content from one domain and style from another. MIXGAN overcomes the limitation of current GAN-based models, which either generate new images in the same domain as observed during training, or require off-the-shelf content templates for transfer or translation. Extensive experimental results demonstrate the effectiveness of MIXGAN compared to related state-of-the-art GAN-based models.
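The abstract does not specify the network architecture, so the following is only a minimal conceptual sketch of the mixture idea it describes: a single generator conditioned on a content code and a style code that are intended to come from different domains. The class name, layer sizes, and code dimensions below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MixtureGenerator(nn.Module):
    """Toy generator that decodes the concatenation of a content code
    (associated with the content domain) and a style code (associated
    with the style domain) into a flattened image. All dimensions are
    illustrative placeholders."""

    def __init__(self, content_dim=64, style_dim=64, img_pixels=28 * 28):
        super().__init__()
        self.decode = nn.Sequential(
            nn.Linear(content_dim + style_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, img_pixels),
            nn.Tanh(),  # outputs scaled to [-1, 1], as is common for GAN images
        )

    def forward(self, content_code, style_code):
        # Mix the two concepts by concatenating their codes before decoding.
        z = torch.cat([content_code, style_code], dim=1)
        return self.decode(z)


if __name__ == "__main__":
    gen = MixtureGenerator()
    content = torch.randn(8, 64)  # stand-in for a learned content representation
    style = torch.randn(8, 64)    # stand-in for a learned style representation
    fake_images = gen(content, style)
    print(fake_images.shape)      # torch.Size([8, 784])
```

In a full GAN setup, such a generator would be trained adversarially against discriminators on the respective domains so that the content and style codes capture domain-specific concepts; that training procedure is beyond this sketch.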
Keywords:
Machine Learning: Unsupervised Learning
Machine Learning: Deep Learning
Machine Learning: Learning Generative Models