Structure-Aware Handwritten Text Recognition via Graph-Enhanced Cross-Modal Mutual Learning
Ji Gan, Yupeng Zhou, Yanming Zhang, Jiaxu Leng, Xinbo Gao
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5154-5162.
https://doi.org/10.24963/ijcai.2025/574
Existing handwriting recognition methods focus only on learning visual patterns by modeling low-level relationships among adjacent pixels, while overlooking the intrinsic geometric structures of characters. In this paper, we propose GCM, a novel graph-enhanced cross-modal mutual learning network that jointly processes handwritten text images and their corresponding geometric graphs. GCM consists of one shared cross-modal encoder and two parallel inverse decoders. Specifically, the encoder simultaneously extracts visual and geometric information from the cross-modal inputs, and the decoders fuse the resulting multi-modal features for prediction. The two parallel decoders sequentially aggregate cross-modal features in inverse orders (V→G and G→V) and are further enhanced through mutual distillation at each time step, which performs one-to-one knowledge transfer and fully exploits the complementary cross-modal information of both directions. Notably, only one branch of GCM is activated during inference, so no additional model parameters or computational cost are incurred at test time. Experiments show that our method outperforms previous state-of-the-art methods on public benchmarks including IAM, RIMES, and ICDAR-2013 when no extra training data is used.
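
The paper page gives no code; the following is a minimal PyTorch-style sketch of the dual-decoder mutual-distillation scheme the abstract describes. All names (InverseOrderDecoder, mutual_distillation), layer choices, and dimensions are illustrative assumptions rather than the authors' implementation: each branch attends to the visual and geometric feature streams in its own order (V→G or G→V), and the two branches exchange knowledge through a symmetric per-time-step KL loss.

```python
# Illustrative sketch only: module names, sizes, and fusion details are
# assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseOrderDecoder(nn.Module):
    """Decoder that attends to the two modality streams in a fixed order."""
    def __init__(self, d_model, vocab_size, first='visual'):
        super().__init__()
        self.first = first  # 'visual' -> V then G; 'geometric' -> G then V
        self.attn1 = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.attn2 = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.rnn = nn.GRUCell(d_model, d_model)
        self.classifier = nn.Linear(d_model, vocab_size)

    def step(self, h, vis, geo):
        # Aggregate the two modalities sequentially, in this branch's order.
        a, b = (vis, geo) if self.first == 'visual' else (geo, vis)
        q = h.unsqueeze(1)                 # (B, 1, d) query from hidden state
        c1, _ = self.attn1(q, a, a)        # attend to the first modality
        c2, _ = self.attn2(c1, b, b)       # then to the second
        h = self.rnn(c2.squeeze(1), h)
        return h, self.classifier(h)       # new hidden state, per-step logits

def mutual_distillation(logits_vg, logits_gv, tau=2.0):
    """Symmetric per-time-step KL between the two branches' predictions."""
    p_vg = F.log_softmax(logits_vg / tau, dim=-1)
    p_gv = F.log_softmax(logits_gv / tau, dim=-1)
    kl = F.kl_div(p_vg, p_gv.exp(), reduction='batchmean') \
       + F.kl_div(p_gv, p_vg.exp(), reduction='batchmean')
    return 0.5 * kl * tau ** 2

# Training: both branches decode and distill into each other at every step.
d_model, vocab, T, B = 256, 80, 12, 2
vis = torch.randn(B, 50, d_model)   # visual features from the shared encoder
geo = torch.randn(B, 30, d_model)   # geometric (graph) features
dec_vg = InverseOrderDecoder(d_model, vocab, first='visual')
dec_gv = InverseOrderDecoder(d_model, vocab, first='geometric')
h1 = h2 = torch.zeros(B, d_model)
loss = 0.0
for _ in range(T):
    h1, z1 = dec_vg.step(h1, vis, geo)
    h2, z2 = dec_gv.step(h2, vis, geo)
    # Cross-entropy against ground-truth transcriptions omitted for brevity.
    loss = loss + mutual_distillation(z1, z2)
```

At test time one branch is simply dropped and decoding runs through the other, which is why such a scheme adds no inference-time parameters or compute.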
Keywords:
Machine Learning: ML: Classification
Computer Vision: CV: Recognition (object detection, categorization)
Machine Learning: ML: Attention models
Machine Learning: ML: Multi-modal learning
