Arbitrary Talking Face Generation via Attentional Audio-Visual Coherence Learning

Hao Zhu, Huaibo Huang, Yi Li, Aihua Zheng, Ran He

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 2362-2368. https://doi.org/10.24963/ijcai.2020/327

Talking face generation aims to synthesize a face video, from a given speech clip and a facial image, with precise lip synchronization and smooth facial motion over the entire video. Most existing methods focus either on disentangling the information in a single image or on learning temporal information between frames; however, the cross-modal coherence between audio and visual information has not been well addressed during synthesis. In this paper, we propose a novel arbitrary talking face generation framework that discovers audio-visual coherence via the proposed Asymmetric Mutual Information Estimator (AMIE). In addition, we propose a Dynamic Attention (DA) block that selectively focuses on the lip area of the input image during training to further enhance lip synchronization. Experimental results on the benchmark LRW and GRID datasets surpass state-of-the-art methods on prevalent metrics, with robust high-resolution synthesis across gender and pose variations.
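
To make the coherence objective concrete, the sketch below shows a minimal MINE-style mutual information estimator between audio and visual embeddings, using the Donsker-Varadhan lower bound. It is an illustrative stand-in under assumed settings, not the paper's AMIE implementation: the class name MIEstimator, the feature dimensions, and the network sizes are all assumptions introduced here for demonstration.

```python
# Minimal sketch of a MINE-style mutual information estimator between audio
# and visual embeddings (Donsker-Varadhan lower bound). Illustrative only;
# it is NOT the authors' AMIE code, and all dimensions are assumptions.
import math
import torch
import torch.nn as nn


class MIEstimator(nn.Module):
    """Statistics network T(a, v) scoring audio-visual feature pairs."""

    def __init__(self, audio_dim=256, visual_dim=256, hidden_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, audio_feat, visual_feat):
        # Joint samples: audio and visual features from the same frame.
        joint = self.net(torch.cat([audio_feat, visual_feat], dim=1))
        # Marginal samples: shuffle the visual features across the batch
        # to break the audio-visual correspondence.
        perm = torch.randperm(visual_feat.size(0))
        marginal = self.net(torch.cat([audio_feat, visual_feat[perm]], dim=1))
        # Donsker-Varadhan lower bound on I(audio; visual):
        # E_joint[T] - log E_marginal[exp(T)].
        log_mean_exp = torch.logsumexp(marginal, dim=0) - math.log(marginal.size(0))
        return joint.mean() - log_mean_exp


if __name__ == "__main__":
    estimator = MIEstimator()
    audio = torch.randn(32, 256)   # dummy audio embeddings
    visual = torch.randn(32, 256)  # dummy visual (lip-region) embeddings
    # Maximizing this bound during training would encourage the generated
    # frames to stay coherent with the driving speech.
    print(estimator(audio, visual).item())
```

In a generation framework of this kind, the estimator's output would typically be maximized jointly with the generator so that lip motion in the synthesized frames carries as much information about the input audio as possible; the exact asymmetric formulation used by AMIE is described in the paper itself.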
Keywords:
Machine Learning: Deep Generative Models
Humans and AI: Human-Computer Interaction