Towards Unsupervised Deformable-Instances Image-to-Image Translation

Sitong Su, Jingkuan Song, Lianli Gao, Junchen Zhu

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 1004-1010. https://doi.org/10.24963/ijcai.2021/139

Replacing objects in images is a practical functionality of Photoshop, e.g., clothes changing. We define this task as Unsupervised Deformable-Instances Image-to-Image Translation (UDIT), which maps multiple foreground instances of a source domain to a target domain, involving significant changes in shape. In this paper, we propose an effective pipeline named Mask-Guided Deformable-instances GAN (MGD-GAN), which first generates target masks in batch and then uses them to synthesize the corresponding instances on the background image, so that all instances are efficiently translated and the background is well preserved. To improve the quality of synthesized images and stabilize training, we design an elegant training procedure that transforms the unsupervised mask-to-instance process into a supervised one by creating paired examples. To objectively evaluate performance on the UDIT task, we design new evaluation metrics based on object detection. Extensive experiments on four datasets demonstrate the significant advantages of our MGD-GAN over existing methods, both quantitatively and qualitatively. Furthermore, our training time is greatly reduced compared to the state-of-the-art. The code is available at https://github.com/sitongsu/MGD_GAN.
Keywords:
Computer Vision: 2D and 3D Computer Vision
Computer Vision: Computational Photography, Photometry, Shape from X