CROP: Towards Distributional-Shift Robust Reinforcement Learning Using Compact Reshaped Observation Processing

Philipp Altmann, Fabian Ritz, Leonard Feuchtinger, Jonas Nüßlein, Claudia Linnhoff-Popien, Thomy Phan

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 3414-3422. https://doi.org/10.24963/ijcai.2023/380

The safe application of reinforcement learning (RL) requires generalization from limited training data to unseen scenarios. Yet, fulfilling tasks under changing circumstances is a key challenge in RL. Current state-of-the-art approaches to generalization apply data augmentation techniques to increase the diversity of the training data. Even though this prevents overfitting to the training environment(s), it hinders policy optimization. Crafting a suitable observation that contains only crucial information has been shown to be a challenging task in itself. To improve data efficiency and generalization capabilities, we propose Compact Reshaped Observation Processing (CROP), which reduces the state information used for policy optimization. By providing only relevant information, overfitting to a specific training layout is precluded and generalization to unseen environments is improved. We formulate three CROPs that can be applied to fully observable observation and action spaces, and provide a methodological foundation. We empirically show the improvements of CROP in a distributionally shifted safety gridworld. We furthermore provide benchmark comparisons against full observability and data augmentation in two procedurally generated mazes of different sizes.
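
To convey the core idea, the following is a minimal sketch: a gymnasium observation wrapper that crops a full gridworld observation to a compact window centered on the agent. This is an illustration of the general principle only, not one of the paper's three CROPs; the agent-marker convention (cell value 1), the zero padding, and the window radius are all assumptions.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class LocalCropWrapper(gym.ObservationWrapper):
    """Illustrative sketch: reshape a full 2D grid observation into a
    compact (2*radius+1) x (2*radius+1) window centered on the agent.
    Not one of the paper's CROPs; the conventions below are assumed."""

    def __init__(self, env, radius=2):
        super().__init__(env)
        self.radius = radius
        size = 2 * radius + 1
        self.observation_space = spaces.Box(
            low=0, high=255, shape=(size, size), dtype=np.uint8
        )

    def observation(self, obs):
        # Assumption: the agent's cell is marked with the value 1.
        row, col = np.argwhere(obs == 1)[0]
        # Pad so windows near the border keep a fixed size (0 = empty).
        padded = np.pad(obs, self.radius, constant_values=0)
        # After padding, the agent sits at (row + radius, col + radius),
        # so slicing from (row, col) yields the centered window.
        return padded[row:row + 2 * self.radius + 1,
                      col:col + 2 * self.radius + 1]
```

Because the policy then only ever sees a layout-agnostic local window, it cannot memorize global features of a particular training map, which is the intuition behind the generalization gains the abstract describes.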
Keywords:
Machine Learning: ML: Deep reinforcement learning
AI Ethics, Trust, Fairness: ETF: Safety and robustness
Machine Learning: ML: Robustness