Reward Adaptation via Q-Manipulation: Provably Beneficial Reward Function Transfer in Reinforcement Learning
Kevin Jatin Vora
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Doctoral Consortium. Pages 10979-10980.
https://doi.org/10.24963/ijcai.2025/1244
Reinforcement learning has made great strides in game playing and robotics but still faces challenges in sample complexity and generalization. Transfer learning, which lets agents reuse knowledge from prior tasks, offers a promising remedy. My current research focuses on Reward Adaptation, where an agent learns a target task by leveraging knowledge from source tasks that differ only in their reward functions. I propose Q-Manipulation (Q-M), a method that adapts to the target reward by computing and iteratively tightening bounds on the target Q-function, in a process akin to value iteration. These bounds allow provably suboptimal actions to be pruned before learning begins, improving sample efficiency without compromising policy optimality. Through empirical comparisons, I demonstrate the method's effectiveness, generalizability, and practicality. Future work will address changes in transition dynamics and continuous MDPs.
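To make the bound-tightening and pruning idea concrete, here is a minimal Python sketch in a tabular MDP. All names (P, R, gamma, q_lower, q_upper) and the initialization interface are my own illustrative assumptions, not the paper's actual Q-M procedure; in Q-M the initial bounds would presumably be derived from the source tasks' Q-functions.

import numpy as np

def tighten_bounds(P, R, gamma, q_lower, q_upper, n_iters=100):
    # P: (S, A, S) transition probabilities; R: (S, A) target-task rewards.
    # q_lower / q_upper: elementwise bounds on the optimal target Q-function.
    for _ in range(n_iters):
        v_upper = q_upper.max(axis=1)  # optimistic state values, shape (S,)
        v_lower = q_lower.max(axis=1)  # pessimistic state values, shape (S,)
        # Bellman-style backups, akin to value iteration: each update keeps
        # the bounds valid for Q* while (weakly) tightening them.
        q_upper = np.minimum(q_upper, R + gamma * (P @ v_upper))
        q_lower = np.maximum(q_lower, R + gamma * (P @ v_lower))
    return q_lower, q_upper

def prune_actions(q_lower, q_upper):
    # Action a is provably suboptimal in state s when its upper bound falls
    # below the best lower bound over all actions in s, so it can be pruned
    # before learning begins without affecting policy optimality.
    best_lower = q_lower.max(axis=1, keepdims=True)  # (S, 1)
    return q_upper >= best_lower  # boolean keep-mask, shape (S, A)

Pruning via a keep-mask rather than altering Q-values is what preserves optimality in this sketch: an action is removed only when some other action's lower bound dominates its upper bound, so the optimal action in each state always survives.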
Keywords:
Machine Learning: ML: Reinforcement learning
Machine Learning: ML: Multi-task and transfer learning
