CAGAN: Consistent Adversarial Training Enhanced GANs

Yao Ni, Dandan Song, Xi Zhang, Hao Wu, Lejian Liao

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2588-2594. https://doi.org/10.24963/ijcai.2018/359

Generative adversarial networks (GANs) have shown impressive results; however, the generator and the discriminator are optimized in a finite parameter space, which means their performance can still be improved. In this paper, we propose a novel approach to adversarial training between one generator and an exponential number of critics, which are sampled from the original discriminative neural network via dropout. Since the discrepancy between the outputs of different sub-networks on the same sample measures the consistency of these critics, we encourage the critics to be consistent on real samples and inconsistent on generated samples during training, while the generator is trained to generate samples that are consistent across different critics. Experimental results demonstrate that our method obtains state-of-the-art Inception scores of 9.17 and 10.02 on the supervised CIFAR-10 and unsupervised STL-10 image generation tasks, respectively, and achieves competitive semi-supervised classification results on several benchmarks. Importantly, we demonstrate that our method maintains training stability and alleviates mode collapse.
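The core mechanism described above can be illustrated with a minimal sketch. The code below is a hypothetical NumPy illustration, not the paper's implementation: a tiny discriminator with dropout is evaluated several times on the same input, each dropout mask yielding one sampled "critic", and the variance of the critic scores serves as the consistency measure. All names (`critic`, `consistency`, the layer sizes, and the dropout rate) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny discriminator: one hidden layer whose units are
# dropped out to sample an exponential family of sub-networks.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def critic(x, rng, p_drop=0.5):
    """Score x with one dropout-sampled sub-network (one 'critic')."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop      # sample a dropout mask
    h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
    return h @ W2                            # scalar score per sample

def consistency(x, rng, n_critics=8):
    """Variance of scores across sampled critics for the same inputs.
    Low variance means the critics are consistent on these samples."""
    scores = np.stack([critic(x, rng) for _ in range(n_critics)])
    return scores.var(axis=0).mean()

x_real = rng.normal(size=(4, 8))   # stand-ins for real samples
x_fake = rng.normal(size=(4, 8))   # stand-ins for generated samples

# Training (not shown here) would push consistency(x_real) down and
# consistency(x_fake) up for the discriminator, while the generator
# minimizes the consistency penalty on its own samples.
print(consistency(x_real, rng), consistency(x_fake, rng))
```

In a real setup the discriminator would be a convolutional network trained with gradient descent, and the consistency terms would be added to the usual adversarial losses; this sketch only shows how dropout turns one network into many critics and how their disagreement is quantified.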
Keywords:
Machine Learning: Neural Networks
Machine Learning: Semi-Supervised Learning
Machine Learning: Unsupervised Learning
Machine Learning: Deep Learning
Machine Learning: Learning Generative Models