Towards Robust Scene Text Image Super-resolution via Explicit Location Enhancement

Hang Guo, Tao Dai, Guanghao Meng, Shu-Tao Xia

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 782-790. https://doi.org/10.24963/ijcai.2023/87

Scene text image super-resolution (STISR), which aims to improve image quality while boosting downstream scene text recognition accuracy, has recently achieved great success. However, most existing methods treat the foreground (character regions) and the background (non-character regions) equally in the forward process and neglect the disturbance from the complex background, thus limiting performance. To address these issues, we propose LEMMA, a novel method that explicitly models character regions to produce high-level text-specific guidance for super-resolution. To model the location of characters effectively, we propose a location enhancement module that extracts character-region features based on the attention map sequence. In addition, we propose a multi-modal alignment module that performs bidirectional visual-semantic alignment to generate high-quality prior guidance, which is then incorporated into the super-resolution branch in an adaptive manner by the proposed adaptive fusion module. Experiments on TextZoom and four scene text recognition benchmarks demonstrate the superiority of our method over other state-of-the-art methods. Code is available at https://github.com/csguoh/LEMMA.
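The sketch below is a minimal, hypothetical illustration of the two ideas named in the abstract: pooling image features with a per-character attention map sequence to obtain foreground (character-region) features, and adaptively gating text-specific guidance before injecting it into the super-resolution features. Module names, tensor shapes, and the gating design here are assumptions for exposition only; they are not the authors' implementation, which is available at the GitHub link above.

```python
import torch
import torch.nn as nn


class LocationEnhancementSketch(nn.Module):
    """Hypothetical sketch: pool image features with per-character attention
    maps so each character slot gathers its foreground features."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor, attn_maps: torch.Tensor) -> torch.Tensor:
        # feat:      (B, C, H, W) low-resolution image features
        # attn_maps: (B, T, H, W) attention map sequence, one map per character slot
        feat = self.proj(feat)
        # Attention-weighted pooling: each of the T character slots aggregates
        # the spatial positions it attends to, yielding character-region features.
        char_feats = torch.einsum('bchw,bthw->btc', feat, attn_maps)
        return char_feats  # (B, T, C)


class AdaptiveFusionSketch(nn.Module):
    """Hypothetical sketch: gate text-specific guidance before adding it
    to the super-resolution branch features."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, sr_feat: torch.Tensor, guidance: torch.Tensor) -> torch.Tensor:
        # sr_feat, guidance: (B, C, H, W)
        g = self.gate(torch.cat([sr_feat, guidance], dim=1))
        return sr_feat + g * guidance


if __name__ == "__main__":
    B, C, H, W, T = 2, 64, 16, 64, 26
    feat = torch.randn(B, C, H, W)
    attn = torch.softmax(torch.randn(B, T, H * W), dim=-1).view(B, T, H, W)
    char_feats = LocationEnhancementSketch(C)(feat, attn)            # (2, 26, 64)
    fused = AdaptiveFusionSketch(C)(feat, torch.randn(B, C, H, W))   # (2, 64, 16, 64)
    print(char_feats.shape, fused.shape)
```

The gating step keeps the fusion adaptive: where the learned gate is near zero the super-resolution features pass through unchanged, so unreliable guidance (e.g., from cluttered backgrounds) can be suppressed spatially.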
Keywords:
Computer Vision: CV: Recognition (object detection, categorization)
Computer Vision: CV: Applications
Computer Vision: CV: Machine learning for vision