A Case for Validation Buffer in Pessimistic Actor-Critic
Michał Nauman, Mateusz Ostaszewski, Marek Cygan
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Main Track. Pages 5976-5984.
https://doi.org/10.24963/ijcai.2025/665
In this paper, we investigate the issue of error accumulation in critic networks updated via pessimistic temporal difference objectives. We show that the critic approximation error can be approximated by a recursive fixed-point model similar to that of the Bellman value. We use this recursive definition to derive the conditions under which the pessimistic critic is unbiased. Building on these insights, we propose the Validation Pessimism Learning (VPL) algorithm. VPL uses a small validation buffer to adjust the level of pessimism throughout agent training, with the pessimism set such that the approximation error of the critic targets is minimized. We evaluate the proposed approach on a variety of locomotion and manipulation tasks and report improvements in sample efficiency and performance.
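The core idea of the abstract can be illustrated with a toy sketch. The snippet below is not the authors' implementation; it only assumes a common pessimistic-critic form (a two-critic estimate discounted by a pessimism coefficient times the critics' disagreement) and adjusts that coefficient so the mean signed error of the targets on a held-out validation buffer goes to zero. All names (`pessimistic_target`, `update_beta`) and the specific update rule are illustrative assumptions.

```python
import numpy as np

def pessimistic_target(q1, q2, beta):
    # Illustrative pessimistic estimate: mean of two critic predictions
    # minus beta times their disagreement (half the absolute gap).
    mean = (q1 + q2) / 2.0
    spread = np.abs(q1 - q2) / 2.0
    return mean - beta * spread

def update_beta(beta, q1, q2, mc_returns, lr=0.05):
    # Hypothetical adjustment rule: if the pessimistic targets overestimate
    # the observed returns on the validation buffer, increase pessimism;
    # if they underestimate, decrease it.
    bias = np.mean(pessimistic_target(q1, q2, beta) - mc_returns)
    return beta + lr * bias

# Toy "validation buffer": observed discounted returns plus two
# deliberately optimistic (positively biased) critic estimates.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, size=256)
q1 = returns + rng.normal(0.5, 0.3, size=256)
q2 = returns + rng.normal(0.2, 0.3, size=256)

beta = 0.0
for _ in range(200):
    beta = update_beta(beta, q1, q2, returns)

final_bias = float(np.mean(pessimistic_target(q1, q2, beta) - returns))
```

Because the critics above are optimistic, the loop drives `beta` to a positive value at which the validation-buffer bias of the targets is approximately zero, mirroring the abstract's goal of setting pessimism to minimize target approximation error.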
Keywords:
Machine Learning: ML: Reinforcement learning
Planning and Scheduling: PS: Markov decisions processes
Robotics: ROB: Behavior and control
Robotics: ROB: Learning in robotics
