Multi-Feedback Bandit Learning with Probabilistic Contexts
Luting Yang, Jianyi Yang, Shaolei Ren
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 3087-3093.
https://doi.org/10.24963/ijcai.2020/427
Contextual bandit is a classic multi-armed bandit setting, where side information (i.e., context) is available before arm selection. A standard assumption is that exact contexts are perfectly known prior to arm selection and that only a single feedback signal is returned. In this work, we focus on multi-feedback bandit learning with probabilistic contexts, where a bundle of contexts is revealed to the agent along with their corresponding probabilities at the beginning of each round. This models scenarios where contexts are drawn from the probability output of a neural network and the reward function is jointly determined by multiple feedback signals. We propose a kernelized learning algorithm based on the upper confidence bound to choose the optimal arm in a reproducing kernel Hilbert space for each context bundle. Moreover, we theoretically establish an upper bound on the cumulative regret with respect to an oracle that knows the optimal arm given probabilistic contexts, and show that the bound grows sublinearly with time. Our simulation on machine learning model recommendation further validates the sublinearity of our cumulative regret and demonstrates that our algorithm outperforms the approach that selects arms based on the most probable context.
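For intuition, the following is a minimal, hypothetical Python sketch of the core idea: a kernelized UCB learner that, given a bundle of candidate contexts and their probabilities, scores each arm by the probability-weighted UCB of a kernel-ridge posterior reward estimate. The class name, RBF kernel, one-hot arm encoding, and exploration parameter beta are illustrative assumptions rather than the paper's exact algorithm, and the scalar reward here stands in for an aggregate of the multiple feedback signals.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class ProbabilisticContextKernelUCB:
    """Illustrative sketch: kernelized UCB over probabilistic context bundles."""

    def __init__(self, n_arms, beta=1.0, reg=1.0, gamma=1.0):
        self.n_arms, self.beta, self.reg, self.gamma = n_arms, beta, reg, gamma
        self.X, self.y = [], []  # observed (context, arm) features and rewards

    def _feature(self, context, arm):
        # Joint feature: context concatenated with a one-hot arm encoding.
        return np.concatenate([context, np.eye(self.n_arms)[arm]])

    def select_arm(self, contexts, probs):
        """contexts: (m, d) bundle of candidate contexts; probs: (m,) probabilities."""
        probs = np.asarray(probs)
        best_arm, best_score = 0, -np.inf
        for a in range(self.n_arms):
            Z = np.array([self._feature(c, a) for c in contexts])
            if self.X:
                X = np.vstack(self.X)
                K = rbf_kernel(X, X, self.gamma) + self.reg * np.eye(len(X))
                k = rbf_kernel(X, Z, self.gamma)  # (n, m)
                mean = k.T @ np.linalg.solve(K, np.array(self.y))
                # Posterior variance; prior variance is 1 since k(x, x) = 1 for RBF.
                var = 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
            else:
                mean, var = np.zeros(len(Z)), np.ones(len(Z))
            ucb = mean + self.beta * np.sqrt(np.maximum(var, 0.0))
            score = float(probs @ ucb)  # expectation over the context bundle
            if score > best_score:
                best_arm, best_score = a, score
        return best_arm

    def update(self, context, arm, reward):
        # Record the realized context, the chosen arm, and the observed reward.
        self.X.append(self._feature(context, arm))
        self.y.append(reward)
```

In each round, the agent would call select_arm on the revealed context bundle and its probabilities, then feed the realized context and observed reward back through update.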
Keywords:
Machine Learning: Online Learning
Machine Learning: Recommender Systems