SAR-to-Optical Image Translation via Neural Partial Differential Equations

Mingjin Zhang, Chengyu He, Jing Zhang, Yuxiang Yang, Xiaoqi Peng, Jie Guo

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1644-1650. https://doi.org/10.24963/ijcai.2022/229

Synthetic Aperture Radar (SAR) has become prevalent in remote sensing, yet SAR images are difficult for human visual perception to interpret due to the active imaging mechanism and speckle noise. Recent research on SAR-to-optical image translation offers a promising solution and has attracted increasing attention, though the translated optical images still suffer from low quality and geometric distortion due to the large domain gap. In this paper, we mitigate this issue from a novel perspective, i.e., neural partial differential equations (PDEs). First, based on an efficient numerical scheme for solving PDEs, i.e., the Taylor Central Difference (TCD), we devise a basic TCD residual block to build the backbone network, which promotes the extraction of useful information from SAR images by aggregating and enhancing features from different levels. Furthermore, inspired by Perona-Malik Diffusion (PMD), we devise a PMD neural module that implements feature diffusion across layers, aiming to remove noise in smooth regions while preserving geometric structures. Assembling these components, we propose a novel SAR-to-optical image translation network named S2O-NPDE, which delivers optical images with finer structures and less noise, while enjoying the explainability advantage of explicit mathematical derivation. Experiments on the popular SEN1-2 dataset show that our model outperforms state-of-the-art methods in terms of both objective metrics and visual quality.
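For context, the central difference scheme referenced above follows directly from Taylor expansion, and the classical Perona-Malik diffusion equation motivates the PMD module. The standard textbook forms are shown below; these are not the paper's exact network equations, and K denotes the usual edge-stopping contrast parameter.

\begin{align}
u(x \pm h) &= u(x) \pm h\,u'(x) + \tfrac{h^2}{2}\,u''(x) \pm \tfrac{h^3}{6}\,u'''(x) + O(h^4) \\
\Rightarrow \quad u'(x) &\approx \frac{u(x+h) - u(x-h)}{2h} \quad \text{(second-order central difference)} \\
\frac{\partial u}{\partial t} &= \operatorname{div}\!\big( g(\lVert \nabla u \rVert)\, \nabla u \big), \qquad g(s) = \frac{1}{1 + (s/K)^2}
\end{align}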
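As a concrete illustration of how a Perona-Malik-style diffusion step can act on network feature maps, the following is a minimal PyTorch sketch of one explicit PMD update. It is an assumption-based illustration only, not the authors' S2O-NPDE implementation; the class name PMDBlock and the hyperparameters kappa, step, and num_steps are hypothetical.

import torch
import torch.nn as nn


class PMDBlock(nn.Module):
    """Explicit Perona-Malik diffusion steps on feature maps.

    du/dt = div( g(|grad u|) * grad u ),  g(s) = 1 / (1 + (s/kappa)^2),
    discretized with one-sided differences toward the four neighbours.
    """

    def __init__(self, kappa: float = 0.1, step: float = 0.2, num_steps: int = 1):
        super().__init__()
        self.kappa = kappa      # edge-stopping contrast parameter (hypothetical default)
        self.step = step        # explicit time-step size
        self.num_steps = num_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        for _ in range(self.num_steps):
            # Differences toward the four neighbours, with replicated borders.
            dn = torch.cat([x[:, :, :1, :], x[:, :, :-1, :]], dim=2) - x   # north
            ds = torch.cat([x[:, :, 1:, :], x[:, :, -1:, :]], dim=2) - x   # south
            de = torch.cat([x[:, :, :, 1:], x[:, :, :, -1:]], dim=3) - x   # east
            dw = torch.cat([x[:, :, :, :1], x[:, :, :, :-1]], dim=3) - x   # west
            # Edge-stopping function: conduction is small across strong gradients,
            # so structures are preserved while smooth regions are denoised.
            g = lambda d: 1.0 / (1.0 + (d / self.kappa) ** 2)
            x = x + self.step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return x

A quick usage check on random features, e.g. PMDBlock()(torch.randn(2, 16, 64, 64)), returns a tensor of the same shape in which low-contrast fluctuations are smoothed more strongly than high-contrast edges, which is the qualitative behaviour the PMD module in the paper is designed to exploit.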
Keywords:
Computer Vision: Applications
Computer Vision: Neural generative models, auto encoders, GANs