Regularizing Variational Autoencoder with Diversity and Uncertainty Awareness

Dazhong Shen, Chuan Qin, Chao Wang, Hengshu Zhu, Enhong Chen, Hui Xiong

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 2964-2970. https://doi.org/10.24963/ijcai.2021/408

As one of the most popular generative models, Variational Autoencoder (VAE) approximates the posterior of latent variables based on amortized variational inference. However, when the decoder network is sufficiently expressive, VAE may suffer from posterior collapse; that is, it may learn uninformative latent representations. To this end, in this paper, we propose an alternative model, DU-VAE, for learning a more Diverse and less Uncertain latent space, so that representations can be learned in a meaningful and compact manner. Specifically, we first demonstrate theoretically that controlling the distribution of the posterior's parameters across the whole dataset yields a better latent space with higher diversity and lower uncertainty. Then, without introducing new loss terms or modifying the training strategy, we propose to apply Dropout to the variances and Batch-Normalization to the means simultaneously, regularizing their distributions implicitly. Furthermore, to evaluate the generalization of our approach, we also apply DU-VAE to the inverse autoregressive flow-based VAE (VAE-IAF) empirically. Finally, extensive experiments on three benchmark datasets clearly show that our approach can outperform state-of-the-art baselines on both likelihood estimation and downstream classification tasks.
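
The following is a minimal sketch, not the authors' implementation, of the regularization idea described in the abstract: applying Batch-Normalization to the posterior means and Dropout to the posterior variances produced by a VAE encoder, with no extra loss terms. Names such as DUVAEEncoder, the network sizes, and the choice to apply Dropout to the log-variance output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DUVAEEncoder(nn.Module):
    """Sketch of a VAE encoder with the DU-VAE-style regularizers:
    BatchNorm on posterior means, Dropout on posterior (log-)variances."""

    def __init__(self, input_dim: int, hidden_dim: int, latent_dim: int,
                 dropout_p: float = 0.3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        self.mu_head = nn.Linear(hidden_dim, latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, latent_dim)
        # Batch-Normalization regularizes the distribution of posterior means
        # across the data, encouraging a more diverse latent space.
        self.mu_bn = nn.BatchNorm1d(latent_dim)
        # Dropout on the (log-)variances implicitly constrains the posterior
        # uncertainty, without adding any new loss term.
        self.logvar_dropout = nn.Dropout(p=dropout_p)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        mu = self.mu_bn(self.mu_head(h))
        logvar = self.logvar_dropout(self.logvar_head(h))
        # Standard reparameterization trick: z = mu + sigma * eps
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        return z, mu, logvar
```

The encoder can be dropped into a standard VAE training loop unchanged, since the ELBO objective itself is untouched; only the parameterization of the approximate posterior is regularized.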
Keywords:
Machine Learning: Bayesian Learning
Machine Learning: Probabilistic Machine Learning
Machine Learning: Unsupervised Learning