Strip Attention for Image Restoration

Yuning Cui, Yi Tao, Luoxi Jing, Alois Knoll

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 645-653. https://doi.org/10.24963/ijcai.2023/72

As a long-standing task, image restoration aims to recover the latent sharp image from its degraded counterpart. In recent years, owing to the strong ability of self-attention to capture long-range dependencies, Transformer-based methods have achieved promising performance on various image restoration tasks. However, canonical self-attention has quadratic complexity with respect to the input size, hindering its further application in image restoration. In this paper, we propose a Strip Attention Network (SANet) for image restoration that integrates information in a more efficient and effective manner. Specifically, a strip attention unit is proposed to harvest contextual information for each pixel from its adjacent pixels in the same row or column. By applying this operation in different directions, each location can perceive information from an expanded region. Furthermore, we apply different receptive fields to different feature groups to enhance representation learning. Incorporating these designs into a U-shaped backbone, our SANet performs favorably against state-of-the-art algorithms on several image restoration tasks. The code is available at https://github.com/c-yn/SANet.
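To illustrate the core idea, the following is a minimal NumPy sketch of strip-style aggregation: each pixel gathers context from a window of adjacent pixels in its row and column, with a different window size per channel group. This is an assumption-laden illustration, not the authors' implementation; in particular, uniform weights stand in for the learned attention weights, and the group window sizes `ks` are hypothetical.

```python
import numpy as np

def strip_aggregate(x, k, axis):
    """Aggregate each pixel with its k-wide strip of neighbors along one axis.
    x: (C, H, W) feature map; k: odd window length; axis: 1 (column-wise, i.e.
    vertical strips) or 2 (row-wise, i.e. horizontal strips).
    Uniform averaging stands in for the learned attention weights."""
    pad = [(0, 0)] * 3
    pad[axis] = (k // 2, k // 2)
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x)
    for i in range(k):  # sum shifted copies of the padded map over the window
        out += np.take(xp, range(i, i + x.shape[axis]), axis=axis)
    return out / k

def strip_attention_block(x, ks=(3, 5, 7, 9)):
    """Split channels into groups, aggregate each group along rows and then
    columns with its own receptive field, and concatenate the results, so
    every pixel perceives an expanded (k x k) region overall."""
    groups = np.array_split(x, len(ks), axis=0)
    outs = [strip_aggregate(strip_aggregate(g, k, axis=2), k, axis=1)
            for g, k in zip(groups, ks)]
    return np.concatenate(outs, axis=0)

x = np.random.rand(8, 16, 16).astype(np.float32)
y = strip_attention_block(x)
print(y.shape)  # output keeps the input's (C, H, W) shape
```

Chaining a horizontal and a vertical strip pass is what lets a 1D operation cover a 2D neighborhood at linear rather than quadratic cost, which is the efficiency argument the abstract makes.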
Keywords:
Computer Vision: CV: Other
Computer Vision: CV: Machine learning for vision
Computer Vision: CV: Representation learning