Causal Deep Reinforcement Learning Using Observational Data

Wenxuan Zhu, Chao Yu, Qiang Zhang

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 4711-4719. https://doi.org/10.24963/ijcai.2023/524

Deep reinforcement learning (DRL) requires the collection of interventional data, which is sometimes expensive and even unethical in the real world, such as in autonomous driving and medicine. Offline reinforcement learning promises to alleviate this issue by exploiting the vast amount of observational data available in the real world. However, observational data may mislead the learning agent toward undesirable outcomes if the behavior policy that generates the data depends on unobserved random variables (i.e., confounders). In this paper, we propose two deconfounding methods in DRL to address this problem. The methods first calculate the importance degree of different samples based on causal inference techniques, and then adjust the impact of different samples on the loss function by reweighting or resampling the offline dataset to ensure that it is unbiased. These deconfounding methods can be flexibly combined with existing model-free DRL algorithms such as soft actor-critic and deep Q-learning, provided that the loss functions of these algorithms satisfy a weak condition. We prove the effectiveness of our deconfounding methods and validate them experimentally.
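The sketch below illustrates the general reweighting/resampling idea described in the abstract, not the paper's exact estimator: it assumes per-sample importance weights are available (here computed, for illustration only, as an inverse-propensity-style ratio between a target policy and the confounded behavior policy), and shows how such weights can either rescale per-sample losses or drive weighted resampling of an offline batch. All function names, the weight formula, and the toy data are hypothetical.

```python
import numpy as np

# Hypothetical sketch of deconfounding an offline RL dataset by
# importance reweighting or resampling. The weight formula is an
# assumption for illustration; the paper's estimator may differ.

rng = np.random.default_rng(0)

def importance_weights(behavior_probs, target_probs, eps=1e-8):
    """Per-sample importance degree w_i = pi_target(a|s) / pi_behavior(a|s)."""
    w = target_probs / np.maximum(behavior_probs, eps)
    return w / w.mean()  # normalize so the overall loss scale is preserved

def reweighted_loss(per_sample_losses, weights):
    """Adjust each sample's impact on the loss via its importance weight."""
    return np.mean(weights * per_sample_losses)

def resample_batch(indices, weights, batch_size):
    """Draw a mini-batch with probability proportional to the weights."""
    p = weights / weights.sum()
    return rng.choice(indices, size=batch_size, replace=True, p=p)

# Toy usage with fabricated numbers (illustration only).
n = 1000
behavior_probs = rng.uniform(0.1, 1.0, size=n)  # confounded behavior policy pi_b(a|s)
target_probs = rng.uniform(0.1, 1.0, size=n)    # learned policy pi(a|s)
losses = rng.uniform(0.0, 2.0, size=n)          # e.g., per-sample TD errors

w = importance_weights(behavior_probs, target_probs)
print("reweighted loss:", reweighted_loss(losses, w))
print("resampled batch indices:", resample_batch(np.arange(n), w, batch_size=8))
```

Either variant plugs into a model-free DRL loop at the point where the mini-batch loss is computed, which is why the approach only requires a weak condition on the loss function rather than changes to the algorithm itself.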
Keywords:
Machine Learning: ML: Deep reinforcement learning
Machine Learning: ML: Causality
Uncertainty in AI: UAI: Causality, structural causal models and causal inference