Learning Multi-Objective Rewards and User Utility Function in Contextual Bandits for Personalized Ranking

Nirandika Wanigasekara, Yuxuan Liang, Siong Thye Goh, Ye Liu, Joseph Jay Williams, David S. Rosenblum

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3835-3841. https://doi.org/10.24963/ijcai.2019/532

This paper tackles the problem of providing users with ranked lists of relevant search results by incorporating contextual features of the users and search results, and by learning how a user values multiple objectives. For example, to recommend a ranked list of hotels, an algorithm must learn which hotels are the right price for users, as well as how users vary in their weighting of price against location. We formulate this context-aware, multi-objective ranking problem as a Multi-Objective Contextual Ranked Bandit (MOCR-B). To solve the MOCR-B problem, we present a novel algorithm named Multi-Objective Utility-Upper Confidence Bound (MOU-UCB), whose goal is to generate a ranked list of resources that maximizes the rewards across multiple objectives and thereby yields relevant search results. The algorithm learns to predict rewards in multiple objectives from contextual information, combining the Upper Confidence Bound algorithm for contextual multi-armed bandits with neural network embeddings, and it simultaneously learns how a user weights the multiple objectives. Our empirical results reveal that the ranked lists generated by MOU-UCB lead to better click-through rates than approaches that do not learn the utility function over multiple reward objectives.
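To make the algorithmic idea above concrete, the following is a minimal sketch, not the authors' implementation: it replaces MOU-UCB's neural network embeddings with plain linear (LinUCB-style) per-objective reward models, and the utility-weight update on clicks is a hypothetical heuristic standing in for the paper's utility-learning step. The class and parameter names (MultiObjectiveUCBRanker, alpha, lr) are illustrative, not from the paper.

import numpy as np

class MultiObjectiveUCBRanker:
    """Per-objective LinUCB scalarized by learned utility weights.

    Illustrative sketch only: MOU-UCB as published additionally learns
    neural network embeddings of the context; here plain linear features
    stand in for those embeddings, and the click-driven weight update is
    a hypothetical heuristic, not the paper's rule.
    """

    def __init__(self, n_objectives, dim, alpha=1.0, lr=0.05):
        self.k, self.alpha, self.lr = n_objectives, alpha, lr
        # One ridge-regression model (A, b) per reward objective.
        self.A = [np.eye(dim) for _ in range(n_objectives)]
        self.b = [np.zeros(dim) for _ in range(n_objectives)]
        # Utility weights over objectives, kept on the probability simplex.
        self.w = np.full(n_objectives, 1.0 / n_objectives)

    def ucb_scores(self, x):
        """Optimistic (mean + exploration bonus) reward estimate per objective."""
        scores = np.empty(self.k)
        for j in range(self.k):
            A_inv = np.linalg.inv(self.A[j])
            theta = A_inv @ self.b[j]
            scores[j] = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
        return scores

    def rank(self, arm_features):
        """Return arm indices ordered by utility-weighted UCB, best first."""
        utilities = [self.w @ self.ucb_scores(x) for x in arm_features]
        return np.argsort(utilities)[::-1]

    def update(self, x, rewards, clicked):
        """Feed back observed per-objective rewards for a shown arm."""
        for j in range(self.k):
            self.A[j] += np.outer(x, x)
            self.b[j] += rewards[j] * x
        if clicked:  # heuristic: shift weight toward objectives that paid off
            self.w = np.clip(self.w + self.lr * np.asarray(rewards), 1e-6, None)
            self.w /= self.w.sum()

A caller would keep one ranker per user (or user segment), e.g. ranker = MultiObjectiveUCBRanker(n_objectives=2, dim=8) for price and location objectives over 8-dimensional context features, call rank() on the candidate arms' feature vectors, and pass the observed per-objective rewards and click outcome back through update().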
Keywords:
Machine Learning: Online Learning
Machine Learning Applications: Applications of Reinforcement Learning