Rethinking InfoNCE: How Many Negative Samples Do You Need?

Chuhan Wu, Fangzhao Wu, Yongfeng Huang

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2509-2515. https://doi.org/10.24963/ijcai.2022/348

InfoNCE is a widely used contrastive training loss. It aims to estimate the mutual information between a pair of variables by discriminating between each positive pair and its associated K negative pairs. It has been proven that when the sample labels are clean, the lower bound of the mutual information estimate is tighter when more negative samples are incorporated, which usually yields better model performance. In practice, however, labels often contain noise, and incorporating too many noisy negative samples into model training may be suboptimal. In this paper, we study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework. More specifically, we first propose a probabilistic model to analyze the influence of the negative sampling ratio K on training sample informativeness. Then, we design a training effectiveness function to measure the overall influence of training samples based on their informativeness. We estimate the optimal negative sampling ratio as the K value that maximizes the training effectiveness function. Based on our framework, we further propose an adaptive negative sampling method that can dynamically adjust the negative sampling ratio to improve InfoNCE-based model training. Extensive experiments on three different tasks show that our framework can accurately predict the optimal negative sampling ratio, and that various models can benefit from our adaptive negative sampling method.
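To make the role of the negative sampling ratio K concrete, the following is a minimal sketch (not the paper's implementation) of the standard InfoNCE loss for one positive pair and K negative pairs; the function name, NumPy usage, and temperature parameter are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(pos_score, neg_scores, temperature=1.0):
    """Standard InfoNCE loss for one positive pair and K negatives.

    pos_score:   similarity score of the positive pair (scalar)
    neg_scores:  similarity scores of the K negative pairs, shape (K,)
    temperature: softmax temperature (illustrative hyperparameter)
    """
    # Stack the positive logit first, then the K negative logits
    logits = np.concatenate(([pos_score], np.asarray(neg_scores))) / temperature
    # Numerically stable log-softmax of the positive entry
    logits = logits - logits.max()
    return float(np.log(np.exp(logits).sum()) - logits[0])

# Illustration: with a fixed positive score, adding more (noise-free)
# negatives makes the discrimination task harder, so the loss grows with K
loss_k1 = info_nce_loss(0.9, [0.1])
loss_k8 = info_nce_loss(0.9, [0.1] * 8)
```

Here `loss_k8 > loss_k1`, reflecting the clean-label regime the abstract describes, where larger K tightens the mutual information bound; the paper's point is that under label noise this monotone benefit breaks down, motivating an optimal finite K.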
Keywords:
Humans and AI: Personalization and User Modeling
Data Mining: Information Retrieval