Active Contrastive Set Mining for Robust Audio-Visual Instance Discrimination

Hanyu Xuan, Yihong Xu, Shuo Chen, Zhiliang Wu, Jian Yang, Yan Yan, Xavier Alameda-Pineda

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3643-3649. https://doi.org/10.24963/ijcai.2022/506

The recent success of audio-visual representation learning can be largely attributed to the pervasive property of audio-visual synchronization, which serves as self-annotated supervision. As a state-of-the-art solution, Audio-Visual Instance Discrimination (AVID) extends instance discrimination to the audio-visual realm. Existing AVID methods construct the contrastive set by random sampling, under the assumption that audio and visual clips from all other videos are not semantically related. We argue that this assumption is too coarse, since the resulting contrastive sets contain a large number of faulty negatives. In this paper, we overcome this limitation by proposing a novel Active Contrastive Set Mining (ACSM) method that mines contrastive sets with informative and diverse negatives for robust AVID. Moreover, we integrate a semantically-aware hard-sample mining strategy into our ACSM. The proposed ACSM is implemented in two recent state-of-the-art AVID methods and significantly improves their performance. Extensive experiments on both action and sound recognition across multiple datasets demonstrate the markedly improved performance of our method.
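To make the general setup concrete, below is a minimal sketch of a symmetric cross-modal instance-discrimination loss in which off-diagonal pairs that are already highly similar to the anchor are masked out as suspected faulty negatives. This is an illustration under our own assumptions, not the authors' actual ACSM algorithm: the function name `mined_avid_loss`, the `faulty_thresh` cutoff, and the cosine-similarity test for faulty negatives are all hypothetical.

```python
# Illustrative sketch only (not the paper's exact ACSM): cross-modal InfoNCE
# where off-diagonal pairs whose embeddings are too similar to the anchor are
# treated as likely faulty negatives and masked out of the contrastive set.
import torch
import torch.nn.functional as F

def mined_avid_loss(video_emb, audio_emb, temperature=0.07, faulty_thresh=0.9):
    """Cross-modal contrastive loss over a batch of paired clip embeddings.

    video_emb, audio_emb: (B, D) embeddings of temporally aligned clips
        (L2-normalized internally).
    faulty_thresh: hypothetical cosine-similarity cutoff above which an
        off-diagonal pair is suspected to be a faulty negative and masked.
    """
    v = F.normalize(video_emb, dim=1)
    a = F.normalize(audio_emb, dim=1)
    cos_sim = v @ a.t()                           # (B, B) cosine similarities
    logits = cos_sim / temperature
    b = logits.size(0)
    eye = torch.eye(b, dtype=torch.bool, device=logits.device)

    # Mask suspected faulty negatives: off-diagonal pairs that are already
    # highly similar are likely semantically related, not true negatives.
    faulty = (cos_sim > faulty_thresh) & ~eye
    logits = logits.masked_fill(faulty, float('-inf'))

    targets = torch.arange(b, device=logits.device)
    # Symmetric objective: video-to-audio and audio-to-video matching.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

In this toy version, masking simply removes suspected faulty negatives from the softmax; the paper's ACSM instead actively mines a contrastive set of informative and diverse negatives, which this sketch does not attempt to reproduce.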
Keywords:
Machine Learning: Multi-modal learning
Machine Learning: Representation learning
Machine Learning: Self-supervised learning