I2CNet: An Intra- and Inter-Class Context Information Fusion Network for Blastocyst Segmentation

Hua Wang, Linwei Qiu, Jingfei Hu, Jicong Zhang

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 1415-1422. https://doi.org/10.24963/ijcai.2022/197

The quality of a blastocyst directly determines the embryo's implantation potential, making it essential to identify blastocyst morphology objectively and accurately. In this work, we propose an automatic framework named I2CNet for blastocyst segmentation in human embryo images. I2CNet comprises two components: an IntrA-Class Context Module (IACCM) and an InteR-Class Context Module (IRCCM). For each pixel, the IACCM aggregates the representations of regions sharing the same category, where the categorized regions are learned under ground-truth supervision. This aggregation decomposes a K-category recognition task into K binary recognition tasks while retaining the ability to capture intra-class features. In addition, the IRCCM is designed around blastocyst morphology to supply complementary inter-class information, which is gathered progressively from the inside out. Meanwhile, a weighted mapping function is applied to emphasize inter-class edges and hard samples. Finally, the learned intra- and inter-class cues are integrated from coarse to fine, enabling sufficient interaction and fusion between multi-scale features. Quantitative and qualitative experiments demonstrate the superiority of our model over other representative methods: I2CNet achieves an accuracy of 94.14% and a Jaccard index of 85.25% on a public blastocyst dataset.
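The decomposition of a K-category task into K binary tasks, together with per-class feature aggregation, can be sketched as follows. This is a minimal illustration assuming NumPy arrays; the function names (`to_binary_masks`, `intra_class_context`) are hypothetical and do not come from the paper's released code.

```python
import numpy as np

def to_binary_masks(label_map, num_classes):
    """Split an (H, W) integer label map into K binary (H, W) masks,
    one per class -- the K-into-K-binary decomposition."""
    return np.stack([(label_map == k).astype(np.float32)
                     for k in range(num_classes)])

def intra_class_context(features, class_maps, eps=1e-6):
    """Aggregate one context vector per class by averaging pixel features
    weighted by each (soft or hard) class map.

    features:   (C, H, W) feature tensor
    class_maps: (K, H, W) per-class probability or indicator maps
    returns:    (K, C) matrix, one context vector per class
    """
    k = class_maps.shape[0]
    maps = class_maps.reshape(k, -1)              # (K, H*W)
    feats = features.reshape(features.shape[0], -1)  # (C, H*W)
    # Normalize each class map so it forms a spatial average
    weights = maps / (maps.sum(axis=1, keepdims=True) + eps)
    return weights @ feats.T                      # (K, C)
```

Each binary mask can be supervised independently against its ground-truth counterpart, while the aggregated context vectors carry the intra-class cues back to every pixel of that class.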
Keywords:
Computer Vision: Biomedical Image Analysis
Computer Vision: Segmentation
Machine Learning: Convolutional Networks