Temporal Adaptive Alignment Network for Deep Video Inpainting
Ruixin Liu, Zhenyu Weng, Yuesheng Zhu, Bairong Li
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 927-933.
https://doi.org/10.24963/ijcai.2020/129
Video inpainting aims to synthesize visually pleasing and temporally consistent content in the missing regions of a video. Due to the variety of motions across different frames, it is highly challenging to exploit temporal information effectively when recovering videos. Existing deep learning based methods usually estimate optical flow to align frames and thereby exploit useful information between frames. However, these methods tend to generate artifacts once the estimated optical flow is inaccurate. To alleviate this problem, we propose a novel end-to-end Temporal Adaptive Alignment Network (TAAN) for video inpainting. TAAN aligns reference frames with the target frame via implicit motion estimation at the feature level and then reconstructs the target frame by taking the aggregated aligned reference frame features as input. In the proposed network, a Temporal Adaptive Alignment (TAA) module based on deformable convolutions is designed to perform temporal alignment in a local, dense and adaptive manner. Both quantitative and qualitative evaluation results show that our method significantly outperforms existing deep learning based methods.
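The core idea of deformable-convolution-based alignment is to warp a reference frame's feature map toward the target frame using learned per-pixel offsets, sampling off-grid locations with bilinear interpolation. The sketch below, a minimal NumPy illustration and not the authors' implementation, applies supplied offsets (in TAAN these would be predicted from target/reference features) to a single feature channel; the function names `bilinear_sample` and `deformable_align` are hypothetical.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a 2-D feature map at float coords (y, x); zero outside."""
    H, W = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    wy1, wx1 = y - y0, x - x0
    val = 0.0
    for yy, wy in ((y0, 1 - wy1), (y0 + 1, wy1)):
        for xx, wx in ((x0, 1 - wx1), (x0 + 1, wx1)):
            if 0 <= yy < H and 0 <= xx < W:
                val += wy * wx * feat[yy, xx]
    return val

def deformable_align(ref_feat, offsets):
    """Warp a reference feature map toward the target with per-pixel offsets.

    ref_feat: (H, W) feature channel from a reference frame.
    offsets:  (H, W, 2) per-pixel (dy, dx) displacements. In a deformable
              convolution these offsets are learned; here they are given.
    """
    H, W = ref_feat.shape
    out = np.zeros_like(ref_feat, dtype=float)
    for i in range(H):
        for j in range(W):
            dy, dx = offsets[i, j]
            # Sample the reference feature at the displaced location.
            out[i, j] = bilinear_sample(ref_feat, i + dy, j + dx)
    return out
```

Because each output pixel carries its own offset, this alignment is local, dense and adaptive, in contrast to a single global warp or an explicit optical-flow field that must be estimated accurately beforehand.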
Keywords:
Computer Vision: 2D and 3D Computer Vision
Machine Learning: Deep Learning: Convolutional networks
Machine Learning Applications: Applications of Unsupervised Learning