Unsupervised Multi-Modal Medical Image Registration via Discriminator-Free Image-to-Image Translation

Zekang Chen, Jia Wei, Rui Li

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 834-840. https://doi.org/10.24963/ijcai.2022/117

In clinical practice, well-aligned multi-modal images, such as Magnetic Resonance (MR) and Computed Tomography (CT), can together provide complementary information for image-guided therapies. Multi-modal image registration is essential for accurately aligning these images, yet it remains very challenging due to the complicated and unknown spatial correspondences between different modalities. In this paper, we propose a novel translation-based unsupervised deformable image registration approach that converts the multi-modal registration problem into a mono-modal one. Specifically, our approach incorporates a discriminator-free translation network to facilitate the training of the registration network, together with a patchwise contrastive loss that encourages the translation network to preserve object shapes. Furthermore, we replace the adversarial loss widely used in previous multi-modal image registration methods with a pixel loss, in order to integrate the output of the translation into the target modality. The result is an unsupervised method that requires neither ground-truth deformations nor pairs of aligned images for training. We evaluate four variants of our approach on the public Learn2Reg 2021 datasets. The experimental results demonstrate that the proposed architecture achieves state-of-the-art performance. Our code is available at https://github.com/heyblackC/DFMIR.
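The two loss ingredients named in the abstract can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' implementation (see the linked repository for that): a plain L1 pixel loss standing in for the adversarial loss, and a PatchNCE-style contrastive loss (as popularized by contrastive unpaired translation) for the patchwise shape-preserving term. All function and variable names here are hypothetical.

    import torch
    import torch.nn.functional as F

    def pixel_loss(translated, fixed):
        # L1 pixel loss used in place of the usual adversarial loss:
        # pulls the translated image toward the fixed-modality image.
        return F.l1_loss(translated, fixed)

    def patchwise_nce_loss(feat_in, feat_out, temperature=0.07):
        # PatchNCE-style contrastive loss over N sampled patch embeddings.
        # feat_in, feat_out: (N, C) features at matching spatial locations
        # of the input and translated images; row i of each tensor forms a
        # positive pair, all other rows serve as negatives.
        feat_in = F.normalize(feat_in, dim=1)
        feat_out = F.normalize(feat_out, dim=1)
        logits = feat_in @ feat_out.t() / temperature  # (N, N) similarities
        labels = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, labels)         # positives on the diagonal

In a training step one would, under these assumptions, combine the two terms, e.g. loss = pixel_loss(translated_moving, fixed_image) + patchwise_nce_loss(f_in, f_out), alongside the registration network's similarity and smoothness objectives.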
Keywords:
Computer Vision: Biomedical Image Analysis
Machine Learning: Multi-modal learning
Machine Learning: Unsupervised Learning