DIIN: Diffusion Iterative Implicit Networks for Arbitrary-scale Super-resolution
Tao Dai, Song Wang, Hang Guo, Jianping Wang, Zexuan Zhu
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 855-863.
https://doi.org/10.24963/ijcai.2025/96
Implicit neural representation (INR) aims to represent continuous-domain signals via implicit neural functions and has achieved great success in arbitrary-scale image super-resolution (SR). However, most existing INR-based SR methods focus on learning implicit features from independent coordinates while neglecting interactions among neighboring coordinates, resulting in limited contextual awareness. In this paper, we rethink the forward process of implicit neural functions as a signal diffusion process and propose a novel Diffusion Iterative Implicit Network (DIIN) for arbitrary-scale SR that promotes global signal flow through neighborhood interactions. The DIIN framework mainly consists of stacked Diffusion Iteration Layers with a dictionary cross-attention block that enriches the iterative update process with supplementary information. In addition, we develop a Position-Aware Embedding Block to strengthen spatial dependencies between consecutive input samples. Extensive experiments on public datasets demonstrate that our method achieves state-of-the-art or competitive performance, highlighting its effectiveness and efficiency for arbitrary-scale SR. Our code is available at https://github.com/Song-1205/DIIN.
Keywords:
Computer Vision: CV: Low-level Vision
Machine Learning: ML: Attention models
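To make the abstract's architectural description concrete, below is a minimal PyTorch sketch of how the described components could fit together: a position-aware embedding of query coordinates, stacked iteration layers that refine the latent signal via cross-attention to a learned dictionary, and an RGB decoding head. All class names, dimensions, and layer choices here are illustrative assumptions based only on the abstract; the authors' actual implementation is at https://github.com/Song-1205/DIIN.

```python
import torch
import torch.nn as nn


class PositionAwareEmbedding(nn.Module):
    """Hypothetical position-aware embedding: maps continuous query
    coordinates into feature space so nearby samples share spatial context."""

    def __init__(self, coord_dim: int = 2, dim: int = 64):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(coord_dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (B, N, coord_dim) -> (B, N, dim)
        return self.proj(coords)


class DiffusionIterationLayer(nn.Module):
    """Hypothetical iteration layer: a residual update of the latent signal,
    enriched by cross-attention to a learned dictionary (the 'supplementary
    information' mentioned in the abstract)."""

    def __init__(self, dim: int = 64, dict_size: int = 128, heads: int = 4):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(dict_size, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (B, N, dim) latent signal at the query coordinates
        d = self.dictionary.unsqueeze(0).expand(z.size(0), -1, -1)
        # queries attend to dictionary entries, then a residual update is applied
        attn_out, _ = self.cross_attn(self.norm(z), d, d)
        z = z + attn_out
        z = z + self.mlp(self.norm(z))
        return z


class DIINSketch(nn.Module):
    """Minimal end-to-end sketch: embed coordinates, fuse them with encoder
    features, run stacked iteration layers, and decode RGB per coordinate."""

    def __init__(self, feat_dim: int = 64, dim: int = 64, num_layers: int = 4):
        super().__init__()
        self.pos_embed = PositionAwareEmbedding(coord_dim=2, dim=dim)
        self.fuse = nn.Linear(feat_dim + dim, dim)
        self.layers = nn.ModuleList(
            DiffusionIterationLayer(dim) for _ in range(num_layers)
        )
        self.head = nn.Linear(dim, 3)

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, feat_dim) features sampled from an LR encoder at the coords
        # coords: (B, N, 2) continuous query coordinates in [-1, 1]
        z = self.fuse(torch.cat([feats, self.pos_embed(coords)], dim=-1))
        for layer in self.layers:
            z = layer(z)
        return self.head(z)  # (B, N, 3) predicted RGB values


if __name__ == "__main__":
    model = DIINSketch()
    rgb = model(torch.randn(1, 1024, 64), torch.rand(1, 1024, 2) * 2 - 1)
    print(rgb.shape)  # torch.Size([1, 1024, 3])
```

Because every query coordinate passes through the same stacked layers and attends to a shared dictionary, this sketch illustrates how iterative updates can inject context beyond a single independent coordinate, which is the limitation of prior INR-based SR methods that the abstract highlights.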
