SHAPE: An Unified Approach to Evaluate the Contribution and Cooperation of Individual Modalities

Pengbo Hu, Xingyu Li, Yi Zhou

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3064-3070. https://doi.org/10.24963/ijcai.2022/425

As deep learning advances, there is an ever-growing demand for models capable of synthesizing information from multi-modal resources to address the complex tasks arising in real-life applications. Recently, many large multi-modal datasets have been collected, on which researchers actively explore different methods of fusing multi-modal information. However, little attention has been paid to quantifying the contribution of different modalities within the proposed models. In this paper, we propose the SHapley vAlue-based PErceptual (SHAPE) scores, which measure the marginal contribution of individual modalities and the degree of cooperation across modalities. Using these scores, we systematically evaluate different fusion methods on different multi-modal datasets and tasks. Our experiments suggest that for some tasks where different modalities are complementary, multi-modal models still tend to rely on the dominant modality alone and ignore cooperation across modalities. On the other hand, models learn to exploit cross-modal cooperation when different modalities are indispensable for the task; in this case, the scores indicate that it is better to fuse different modalities at relatively early stages. We hope our scores can help improve understanding of how present multi-modal models operate on different modalities and encourage more sophisticated methods of integrating multiple modalities.
Keywords:
Machine Learning: Evaluation
Machine Learning: Multi-modal learning
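
The abstract describes SHAPE scores as Shapley values computed over modalities. As a rough illustration only, and not the paper's exact formulation, the sketch below computes classical Shapley values per modality, assuming a caller-supplied score function v(S) that evaluates the model when only the modalities in subset S are available (e.g. with the other modalities masked out). The modality names and the toy scores are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(modalities, v):
    """Classical Shapley values over modalities.

    modalities: list of modality names.
    v: callable mapping a frozenset of modality names to a scalar score
       (e.g. validation accuracy with only those modalities available).
    """
    n = len(modalities)
    values = {}
    for m in modalities:
        others = [x for x in modalities if x != m]
        total = 0.0
        # Average the marginal contribution of m over all subsets of the
        # remaining modalities, with the standard Shapley weights.
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {m}) - v(S))
        values[m] = total
    return values

# Toy example with two hypothetical modalities and made-up scores:
scores = {
    frozenset(): 0.50,
    frozenset({"image"}): 0.70,
    frozenset({"text"}): 0.65,
    frozenset({"image", "text"}): 0.85,
}
print(shapley_values(["image", "text"], lambda S: scores[frozenset(S)]))
```

In this toy example the two modalities receive similar Shapley values, reflecting that neither dominates; in the paper's terms, a large gap between the full-model score and the sum of per-modality contributions would instead signal weak cross-modal cooperation.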