DERI: Cross-Modal ECG Representation Learning with Deep ECG-Report Interaction
Jian Chen, Xiaoru Dong, Wei Wang, Shaorui Zhou, Lequan Yu, Xiping Hu
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 4824-4832.
https://doi.org/10.24963/ijcai.2025/537
The electrocardiogram (ECG) is widely used to diagnose cardiac conditions via deep learning methods. Although existing self-supervised learning (SSL) methods achieve strong performance in learning representations for ECG-based cardiac condition classification, they cannot effectively capture clinical semantics. To overcome this limitation, we propose to learn cross-modal ECG representations that carry richer clinical semantics via a novel framework with Deep ECG-Report Interaction (DERI). Specifically, we design a framework that combines multiple alignments with mutual feature reconstruction to learn effective ECG representations jointly with clinical reports, fusing the reports' clinical semantics into the representations. An RME-Module inspired by masked modeling is proposed to further improve ECG representation learning. Furthermore, we extend ECG representation learning to report generation with a language model, which is valuable both for evaluating the clinical semantics of the learned representations and for clinical applications. Comprehensive experiments under various settings across multiple datasets demonstrate the superior performance of DERI. Our code is available at https://github.com/cccccj-03/DERI.
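As a rough illustration of the kind of objective the abstract describes (cross-modal alignment plus mutual feature reconstruction between ECG and report embeddings), the PyTorch snippet below is a minimal sketch under our own assumptions, not the paper's implementation: the class name EcgReportInteraction, all dimensions, the single-level projection heads, and the equal loss weighting are hypothetical stand-ins for the actual DERI design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EcgReportInteraction(nn.Module):
    """Sketch of a cross-modal objective: symmetric contrastive alignment
    of ECG and report embeddings plus mutual feature reconstruction, where
    each modality's features are predicted from the other's embedding.
    Hypothetical illustration only, not the DERI implementation."""

    def __init__(self, ecg_dim=256, txt_dim=768, shared_dim=128, temperature=0.07):
        super().__init__()
        self.ecg_proj = nn.Linear(ecg_dim, shared_dim)   # ECG -> shared space
        self.txt_proj = nn.Linear(txt_dim, shared_dim)   # report -> shared space
        # Mutual reconstruction heads (hypothetical design).
        self.ecg_to_txt = nn.Linear(shared_dim, txt_dim)
        self.txt_to_ecg = nn.Linear(shared_dim, ecg_dim)
        self.temperature = temperature

    def forward(self, ecg_feat, txt_feat):
        # Project both modalities into a shared space and L2-normalize.
        z_e = F.normalize(self.ecg_proj(ecg_feat), dim=-1)
        z_t = F.normalize(self.txt_proj(txt_feat), dim=-1)

        # Symmetric InfoNCE: matched ECG-report pairs sit on the diagonal.
        logits = z_e @ z_t.t() / self.temperature
        targets = torch.arange(z_e.size(0), device=z_e.device)
        align_loss = 0.5 * (F.cross_entropy(logits, targets)
                            + F.cross_entropy(logits.t(), targets))

        # Mutual feature reconstruction: recover each modality's original
        # features from the other modality's shared-space embedding.
        recon_loss = (F.mse_loss(self.txt_to_ecg(z_t), ecg_feat)
                      + F.mse_loss(self.ecg_to_txt(z_e), txt_feat))
        return align_loss + recon_loss

# Toy usage with random features standing in for encoder outputs.
if __name__ == "__main__":
    model = EcgReportInteraction()
    loss = model(torch.randn(8, 256), torch.randn(8, 768))
    loss.backward()
```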
Keywords:
Machine Learning: ML: Self-supervised Learning
Data Mining: DM: Mining spatial and/or temporal data
Machine Learning: ML: Classification
Machine Learning: ML: Multi-modal learning
