Bernoulli Rank-1 Bandits for Click Feedback

Sumeet Katariya, Branislav Kveton, Csaba Szepesvári, Claire Vernade, Zheng Wen

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2001-2007. https://doi.org/10.24963/ijcai.2017/278

The probability that a user will click a search result depends both on its relevance and on its position on the results page. The position-based model explains this behavior by ascribing to every item an attraction probability and to every position an examination probability. To be clicked, a result must be both attractive and examined. The probabilities of an item-position pair being clicked thus form the entries of a rank-1 matrix. We propose the learning problem of a Bernoulli rank-1 bandit where, at each step, the learning agent chooses a pair of row and column arms and receives the product of their Bernoulli-distributed values as a reward. This is a special case of the stochastic rank-1 bandit problem considered in recent work, which proposed an elimination-based algorithm, Rank1Elim, and showed that Rank1Elim's regret scales linearly with the number of rows and columns on "benign" instances: those where the minimum of the average row and column rewards, μ, is bounded away from zero. The issue with Rank1Elim is that it fails to be competitive with straightforward bandit strategies as μ tends to 0. In this paper we propose Rank1ElimKL, which replaces the crude confidence intervals of Rank1Elim with confidence intervals based on Kullback-Leibler (KL) divergences. With the help of a novel result concerning the scaling of KL divergences, we prove that with this change our algorithm is competitive no matter the value of μ. Experiments with synthetic data confirm that on benign instances the performance of Rank1ElimKL is significantly better than that of Rank1Elim. Similarly, experiments with models derived from real data confirm that the improvements are significant across the board, regardless of whether the data is benign or not.
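
To make the feedback model concrete, the following is a minimal Python sketch of the Bernoulli rank-1 bandit environment the abstract describes, together with a standard KL-divergence-based (kl-UCB-style) upper confidence bound of the kind Rank1ElimKL builds on. This is an illustration under our own assumptions, not the paper's implementation: the names Rank1Env, bernoulli_kl, and kl_ucb, and the log(t) exploration threshold, are choices made here for exposition.

import math
import random

def bernoulli_kl(p, q, eps=1e-12):
    # KL divergence between Bernoulli(p) and Bernoulli(q), clamped for stability.
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb(mean, pulls, t, precision=1e-6):
    # Largest q >= mean with pulls * KL(mean, q) <= log(t), found by bisection.
    # This is the KL-based confidence bound that replaces a crude
    # Hoeffding-style interval of width sqrt(log(t) / pulls).
    if pulls == 0:
        return 1.0
    radius = math.log(max(t, 2)) / pulls
    lo, hi = mean, 1.0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) > radius:
            hi = mid
        else:
            lo = mid
    return lo

class Rank1Env:
    # Rank-1 click model: pulling (row i, column j) yields the product of
    # two independent Bernoulli draws, so E[reward] = u[i] * v[j].
    def __init__(self, u, v, seed=0):
        self.u, self.v = u, v          # item attraction / position examination probs
        self.rng = random.Random(seed)

    def pull(self, i, j):
        attracted = self.rng.random() < self.u[i]  # item i is attractive
        examined = self.rng.random() < self.v[j]   # position j is examined
        return int(attracted and examined)         # click only if both hold

# Example: 3 items, 2 positions; pulling (0, 0) is Bernoulli with mean 0.7 * 0.9.
env = Rank1Env(u=[0.7, 0.3, 0.1], v=[0.9, 0.5])
reward = env.pull(0, 0)

Note how the mean reward matrix [u[i] * v[j]] has rank 1 by construction, which is the structure the bandit algorithm exploits: it needs to identify the best row and best column rather than the best of all row-column pairs.
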
Keywords:
Machine Learning: Learning Preferences or Rankings
Machine Learning: Online Learning
Multidisciplinary Topics and Applications: Personalization and User Modeling
Machine Learning: Structured Learning