Solving Continual Combinatorial Selection via Deep Reinforcement Learning

Hyungseok Song, Hyeryung Jang, Hai H. Tran, Se-eun Yoon, Kyunghwan Son, Donggyu Yun, Hyoju Chung, Yung Yi

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3467-3474. https://doi.org/10.24963/ijcai.2019/481

We consider the Markov Decision Process (MDP) of selecting a subset of items at each step, termed the Select-MDP (S-MDP). The large state and action spaces of S-MDPs make them intractable to solve with typical reinforcement learning (RL) algorithms, especially when the number of items is huge. In this paper, we present a deep RL algorithm that addresses this issue by adopting the following key ideas. First, we convert the original S-MDP into an Iterative Select-MDP (IS-MDP), which is equivalent to the S-MDP in terms of optimal actions. IS-MDP decomposes a joint action of selecting K items simultaneously into K iterative selections, shrinking the action space at the expense of an exponential increase in the state space. Second, we overcome this state space explosion by exploiting a special symmetry in IS-MDPs with novel weight-shared Q-networks, which provably maintain sufficient expressive power. Various experiments demonstrate that our approach works well even when the item space is large and that it scales to environments with item spaces different from those used in training.
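The two ideas in the abstract can be illustrated with a minimal sketch: a permutation-equivariant Q-network whose weights are shared across items (here a simple Deep Sets-style pooling over already-selected items; the paper's actual architecture is more elaborate), and a loop that decomposes one size-K joint selection into K iterative single-item picks. All network shapes and names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(d_item, d_hidden):
    # Weights applied identically to every item (weight sharing across items;
    # sizes and initialization are illustrative assumptions).
    return {
        "W_item": rng.normal(0.0, 0.1, (d_item, d_hidden)),
        "W_ctx": rng.normal(0.0, 0.1, (d_hidden, d_hidden)),
        "w_out": rng.normal(0.0, 0.1, (d_hidden,)),
    }

def q_values(params, items, selected_mask):
    """Permutation-equivariant Q-values: one scalar per candidate item.

    A context pooled (summed) over already-selected items is shared by all
    candidates, so permuting the items permutes the Q-values identically.
    """
    h = np.tanh(items @ params["W_item"])        # (n, d_hidden), per-item embedding
    ctx = h[selected_mask].sum(axis=0)           # (d_hidden,), pooled over selected
    g = np.tanh(h + ctx @ params["W_ctx"])       # broadcast shared context to all items
    q = g @ params["w_out"]                      # (n,), one Q-value per item
    q[selected_mask] = -np.inf                   # selected items cannot be re-picked
    return q

def select_k(params, items, K):
    """Decompose one size-K joint selection into K greedy iterative picks."""
    mask = np.zeros(items.shape[0], dtype=bool)
    for _ in range(K):
        mask[np.argmax(q_values(params, items, mask))] = True
    return np.flatnonzero(mask)
```

Because the per-item network and the pooled context are both symmetric in the item ordering, relabeling the n items relabels the Q-values in exactly the same way, which is the symmetry the weight-shared design exploits.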
Keywords:
Machine Learning: Reinforcement Learning
Heuristic Search and Game Playing: Combinatorial Search and Optimisation
Machine Learning: Recommender Systems
Uncertainty in AI: Markov Decision Processes