MSCI: Addressing CLIP's Inherent Limitations for Compositional Zero-Shot Learning
Yue Wang, Shuai Xu, Xuelin Zhu, Yicong Li
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 2009-2017.
https://doi.org/10.24963/ijcai.2025/224
Compositional Zero-Shot Learning (CZSL) aims to recognize unseen state-object combinations by leveraging known combinations. Existing studies primarily rely on the cross-modal alignment capabilities of CLIP but tend to overlook its limitations in capturing fine-grained local features, which stem from its architecture and training paradigm. To address this issue, we propose a Multi-Stage Cross-modal Interaction (MSCI) model that effectively explores and utilizes intermediate-layer information from CLIP's visual encoder. Specifically, we design two self-adaptive aggregators to extract local information from low-level visual features and integrate global information from high-level visual features, respectively. This key information is progressively incorporated into textual representations through a stage-by-stage interaction mechanism, significantly enhancing the model's ability to perceive fine-grained local visual information. Additionally, MSCI dynamically adjusts the attention weights between global and local visual information based on different combinations, as well as different elements within the same combination, allowing it to flexibly adapt to diverse scenarios. Experiments on three widely used datasets fully validate the effectiveness and superiority of the proposed model. Data and code are available at https://github.com/ltpwy/MSCI.
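The following is a minimal sketch of the stage-by-stage cross-modal interaction described above, assuming a PyTorch/CLIP-style setup. All module names, dimensions, and the gating scheme are illustrative assumptions, not the authors' released implementation; see https://github.com/ltpwy/MSCI for the official code.

import torch
import torch.nn as nn

class CrossModalStage(nn.Module):
    """One interaction stage: text tokens attend to visual features
    drawn from one level of the visual encoder (illustrative)."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # text: (B, T, D) textual representations; visual: (B, V, D)
        attended, _ = self.attn(query=text, key=visual, value=visual)
        return self.norm(text + attended)  # residual update of text tokens

class MSCISketch(nn.Module):
    """Progressively fuses low-level (local) and high-level (global)
    visual features into the textual representation, stage by stage,
    with a per-sample gate balancing the two streams (assumed form)."""
    def __init__(self, dim: int, num_stages: int = 2):
        super().__init__()
        self.stages = nn.ModuleList(CrossModalStage(dim) for _ in range(num_stages))
        # Gate weighing global vs. local visual information per sample.
        self.gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, text, low_feats, high_feats):
        # low_feats / high_feats: aggregated features from low/high layers
        # of the visual encoder, each (B, V, D); one stage per stream here.
        for stage, visual in zip(self.stages, (low_feats, high_feats)):
            text = stage(text, visual)
        # Dynamic weighting of global vs. local cues, conditioned on text.
        g = self.gate(text.mean(dim=1, keepdim=True))  # (B, 1, 1)
        fused = g * high_feats.mean(1, keepdim=True) + (1 - g) * low_feats.mean(1, keepdim=True)
        return text + fused  # enriched textual representation

# Usage (hypothetical shapes): text tokens plus features from two CLIP layers.
# out = MSCISketch(dim=512)(text_tokens, low_layer_feats, high_layer_feats)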
Keywords:
Computer Vision: CV: Transfer, low-shot, semi- and un-supervised learning
Computer Vision: CV: Structural and model-based approaches, knowledge representation and reasoning
Machine Learning: ML: Classification
Machine Learning: ML: Deep learning architectures
