Cascading Non-Stationary Bandits: Online Learning to Rank in the Non-Stationary Cascade Model

Chang Li, Maarten de Rijke

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 2859-2865. https://doi.org/10.24963/ijcai.2019/396

Non-stationarity appears in many online applications such as web search and advertising. In this paper, we study the online learning to rank problem in a non-stationary environment where user preferences change abruptly at an unknown moment in time. We consider the problem of identifying the K most attractive items and propose cascading non-stationary bandits, an online learning variant of the cascade model, in which a user browses a ranked list from top to bottom and clicks on the first attractive item. We propose two algorithms for solving this non-stationary problem: CascadeDUCB and CascadeSWUCB. We analyze their performance and derive gap-dependent upper bounds on the n-step regret of these algorithms. We also establish a lower bound on the regret for cascading non-stationary bandits and show that both algorithms match the lower bound up to a logarithmic factor. Finally, we evaluate their performance on a real-world web search click dataset.
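
The abstract gives no pseudocode, so the following is only a minimal Python sketch of the general idea behind a sliding-window UCB policy under cascade feedback, in the spirit of CascadeSWUCB; it is not the paper's specification. The class, method, and parameter names (window, xi, rank, update) are illustrative assumptions, and the index combines a standard sliding-window UCB bonus with the usual cascade-model feedback (items above the first click are treated as unattractive, the clicked item as attractive, items below it as unobserved).

    import math
    from collections import deque

    class SlidingWindowCascadeUCB:
        """Sketch of a sliding-window UCB policy for the cascade model.

        Observations older than `window` rounds are discarded, so the
        attraction estimates can track abrupt changes in user preferences.
        All names and constants here are illustrative assumptions.
        """

        def __init__(self, n_items, k, window=1000, xi=0.5):
            self.n_items = n_items          # number of candidate items L
            self.k = k                      # length of the ranked list K
            self.window = window            # sliding-window size (assumed)
            self.xi = xi                    # exploration constant (assumed)
            self.history = deque()          # (round, item, reward) inside the window
            self.pulls = [0] * n_items      # per-item observation counts in the window
            self.rewards = [0.0] * n_items  # per-item reward sums in the window
            self.t = 0

        def _ucb(self, i):
            if self.pulls[i] == 0:
                return float("inf")         # force initial exploration of every item
            mean = self.rewards[i] / self.pulls[i]
            bonus = math.sqrt(self.xi * math.log(min(self.t, self.window)) / self.pulls[i])
            return mean + bonus

        def rank(self):
            """Recommend the K items with the highest sliding-window UCB indices."""
            self.t += 1
            return sorted(range(self.n_items), key=self._ucb, reverse=True)[: self.k]

        def update(self, ranked_list, click_pos):
            """Cascade feedback: items above the first click get reward 0,
            the clicked item gets reward 1, items below it are unobserved."""
            last = click_pos if click_pos is not None else len(ranked_list) - 1
            for pos in range(last + 1):
                item = ranked_list[pos]
                reward = 1.0 if pos == click_pos else 0.0
                self.history.append((self.t, item, reward))
                self.pulls[item] += 1
                self.rewards[item] += reward
            # Drop observations that fell out of the sliding window.
            while self.history and self.history[0][0] <= self.t - self.window:
                _, old_item, old_reward = self.history.popleft()
                self.pulls[old_item] -= 1
                self.rewards[old_item] -= old_reward

A CascadeDUCB-style policy would instead keep all observations but weight them by a discount factor smaller than one, which forgets stale feedback in a similar way.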
Keywords:
Machine Learning: Learning Preferences or Rankings
Machine Learning: Online Learning
Machine Learning: Recommender Systems
Multidisciplinary Topics and Applications: Information Retrieval