Multi-Modality Deep Network for JPEG Artifacts Reduction
Xuhao Jiang, Weimin Tan, Qing Lin, Chenxi Ma, Bo Yan, Liquan Shen

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 3857-3865. https://doi.org/10.24963/ijcai.2023/429

In recent years, many convolutional neural network-based models have been designed for JPEG artifacts reduction and have achieved notable progress. However, few methods are suitable for reducing the compression artifacts of images compressed at extremely low bitrates. The main challenge is that a highly compressed image loses too much information, making it difficult to reconstruct a high-quality image. To address this issue, we propose a multimodal fusion learning method for text-guided JPEG artifacts reduction, in which the corresponding text description not only provides potential prior information about the highly compressed image, but also serves as supplementary information to assist in image deblocking. We fuse image features and text semantic features from both global and local perspectives, and design a contrastive loss to produce visually pleasing results. Extensive experiments, including a user study, show that our method obtains better deblocking results than state-of-the-art methods.
Keywords:
Machine Learning: ML: Multi-modal learning
Computer Vision: CV: Machine learning for vision
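The abstract mentions a contrastive loss over paired image and text features. The paper's exact formulation is not given here, so the following is only a minimal sketch of a standard symmetric InfoNCE-style contrastive objective between batches of image and text embeddings; the function name, temperature value, and embedding shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss: row i of img_feats is
    assumed to be paired with row i of txt_feats (hypothetical setup)."""
    # L2-normalize each embedding so the dot product is cosine similarity.
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    # Temperature-scaled similarity logits between all image/text pairs.
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def cross_entropy_diag(l):
        # Numerically stable log-softmax over each row; the matching
        # pair sits on the diagonal, so that entry is the target class.
        l = l - l.max(axis=1, keepdims=True)
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

Minimizing this pulls each compressed image's embedding toward its own text description and pushes it away from the other descriptions in the batch; matched pairs therefore yield a lower loss than mismatched ones.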