Split Q Learning: Reinforcement Learning with Two-Stream Rewards

Baihan Lin, Djallel Bouneffouf, Guillermo Cecchi

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 6448-6449. https://doi.org/10.24963/ijcai.2019/913

Drawing inspiration from behavioral studies of human decision making, we propose a general parametric framework for reinforcement learning that extends the standard Q-learning approach to incorporate a two-stream model of reward processing, with biases biologically associated with several neurological and psychiatric conditions, including Parkinson's and Alzheimer's diseases, attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain. For the AI community, developing agents that react differently to different types of rewards can help us understand a wide spectrum of multi-agent interactions in complex real-world socioeconomic systems. Moreover, from the behavioral modeling perspective, our parametric framework can be viewed as a first step towards a unifying computational model that captures reward processing abnormalities across multiple mental conditions as well as user preferences in long-term recommendation systems.
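
To make the two-stream idea concrete, the sketch below keeps two Q tables, one fed by the positive part of each reward and one by the negative part, each with its own bias parameters, and selects actions by their sum. This is only a minimal illustration under stated assumptions: the parameter names (w_pos, w_neg, lam_pos, lam_neg), the epsilon-greedy policy, and the exact update form are choices made here for clarity, not the paper's published formulation.

import numpy as np

class SplitQAgent:
    """Minimal sketch of a two-stream ("split") Q-learning agent.

    The positive/negative reward split follows the idea in the abstract;
    the parameter names and the exact update form are illustrative
    assumptions, not the authors' published model.
    """

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 w_pos=1.0, w_neg=1.0, lam_pos=1.0, lam_neg=1.0):
        self.q_pos = np.zeros((n_states, n_actions))  # values learned from positive rewards
        self.q_neg = np.zeros((n_states, n_actions))  # values learned from negative rewards
        self.alpha, self.gamma = alpha, gamma
        # Bias parameters: w_* scales incoming rewards, lam_* decays stored values.
        self.w_pos, self.w_neg = w_pos, w_neg
        self.lam_pos, self.lam_neg = lam_pos, lam_neg

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy over the sum of the two value streams.
        if np.random.rand() < epsilon:
            return np.random.randint(self.q_pos.shape[1])
        return int(np.argmax(self.q_pos[state] + self.q_neg[state]))

    def update(self, s, a, reward, s_next):
        r_pos = self.w_pos * max(reward, 0.0)  # biased positive reward component
        r_neg = self.w_neg * min(reward, 0.0)  # biased negative reward component
        # Bootstrap both streams at the greedy action of the combined values.
        a_next = int(np.argmax(self.q_pos[s_next] + self.q_neg[s_next]))
        self.q_pos[s, a] = self.lam_pos * self.q_pos[s, a] + self.alpha * (
            r_pos + self.gamma * self.q_pos[s_next, a_next] - self.q_pos[s, a])
        self.q_neg[s, a] = self.lam_neg * self.q_neg[s, a] + self.alpha * (
            r_neg + self.gamma * self.q_neg[s_next, a_next] - self.q_neg[s, a])

Varying the stream-specific parameters (e.g., weighting positive rewards more heavily than negative ones, or decaying one stream faster than the other) is the kind of parametric manipulation the abstract alludes to for modeling different reward-processing profiles.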
Keywords:
Humans and AI: Cognitive Modeling
Machine Learning: Reinforcement Learning
Humans and AI: Brain Sciences
Machine Learning Applications: Applications of Reinforcement Learning