Learning to Draw Text in Natural Images with Conditional Adversarial Networks

Shancheng Fang, Hongtao Xie, Jianjun Chen, Jianlong Tan, Yongdong Zhang

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 715-722. https://doi.org/10.24963/ijcai.2019/101

In this work, we propose an entirely learning-based method to automatically synthesize text sequences in natural images by leveraging conditional adversarial networks. As vanilla GANs struggle to capture structured text patterns, directly employing GANs for text image synthesis typically yields illegible images. We therefore design a two-stage architecture to generate sequences of characters in images. First, a character generator synthesizes the local appearance of each character independently, so that legible characters in sequence can be obtained. To achieve style consistency across characters, we propose a novel style loss based on variance minimization. Second, we design a pixel-manipulation word generator constrained by self-regularization, which learns to convert the local characters into a plausible word image. Experiments on the SVHN, ICDAR, and IIIT5K datasets demonstrate that our method synthesizes visually appealing text images. We also show that the high-quality images synthesized by our method can be used to boost the performance of a scene text recognition algorithm.
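To make the variance-minimization idea concrete, the sketch below shows one plausible form such a style loss could take: given one style-feature vector per generated character, penalize the per-dimension variance across characters so that all characters are pushed toward a shared style. The function name and formulation here are illustrative assumptions, not the authors' exact loss.

```python
def variance_style_loss(char_features):
    """Hypothetical variance-minimization style loss (illustrative sketch).

    char_features: list of equal-length style-feature vectors, one per
    generated character. Returns the mean, over feature dimensions, of
    the variance of that dimension across characters. Identical styles
    give zero loss; divergent styles give a positive loss.
    """
    num_chars = len(char_features)
    dim = len(char_features[0])
    total = 0.0
    for d in range(dim):
        column = [feat[d] for feat in char_features]
        mean = sum(column) / num_chars
        # Population variance of this feature dimension across characters.
        total += sum((x - mean) ** 2 for x in column) / num_chars
    return total / dim

# Characters with identical style features incur no penalty...
print(variance_style_loss([[1.0, 2.0], [1.0, 2.0]]))  # 0.0
# ...while style drift between characters is penalized.
print(variance_style_loss([[1.0, 2.0], [3.0, 4.0]]))  # 1.0
```

Minimizing such a loss during character generation would encourage the independent per-character generator to produce characters sharing a common font, color, and stroke style.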
Keywords:
Computer Vision: Computer Vision
Computer Vision: Statistical Methods and Machine Learning