A Rigorous Risk-aware Linear Approach to Extended Markov Ratio Decision Processes with Embedded Learning

Alexander Zadorojniy, Takayuki Osogami, Orit Davidovich

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 5475-5483. https://doi.org/10.24963/ijcai.2023/608

We consider the problem of risk-aware Markov Decision Processes (MDPs) for Safe AI. We introduce Extended Markov Ratio Decision Processes (EMRDP), a theoretical framework that incorporates risk into MDPs and embeds environment learning within it. We propose an algorithm, with theoretical guarantees, that finds the optimal EMRDP policy. Under a certain monotonicity assumption, this algorithm runs in strongly polynomial time in both the discounted and expected average reward models. We validate the algorithm empirically on a Grid World benchmark, evaluating its solution quality, required number of steps, and numerical stability. Its solution quality remains stable when noise is added to the data, though the number of steps it requires grows with the noise level. Compared to global methods, the algorithm also exhibits better numerical stability.
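To make the benchmark setting concrete, the sketch below solves a small Grid World MDP with standard value iteration in the discounted model. This is a minimal illustration of the kind of environment the abstract mentions, not the paper's EMRDP algorithm; the grid size, reward structure, and discount factor are all assumptions chosen for the example.

```python
import numpy as np

# Illustrative 4x4 Grid World MDP. The grid size, step reward, and
# discount factor are assumptions for this sketch, NOT taken from the paper.
N = 4                      # grid is N x N
GOAL = (N - 1, N - 1)      # absorbing goal state
GAMMA = 0.9                # discount factor
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic transition: move within bounds with -1 reward per
    step; the goal state self-loops with reward 0."""
    if state == GOAL:
        return state, 0.0
    r, c = state
    dr, dc = action
    nr = min(max(r + dr, 0), N - 1)
    nc = min(max(c + dc, 0), N - 1)
    return (nr, nc), -1.0

def value_iteration(tol=1e-8):
    """Standard value iteration for the discounted reward model."""
    V = np.zeros((N, N))
    while True:
        V_new = np.zeros_like(V)
        for r in range(N):
            for c in range(N):
                # Bellman optimality backup over the four actions.
                V_new[r, c] = max(
                    rew + GAMMA * V[ns]
                    for ns, rew in (step((r, c), a) for a in ACTIONS)
                )
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration()
# The goal state's value stays 0; other states' values reflect the
# discounted cost of the shortest path to the goal.
```

A risk-aware or ratio-based formulation such as EMRDP would replace this plain discounted objective; the environment itself stays the same.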
Keywords:
Planning and Scheduling: PS: Markov decision processes
Machine Learning: ML: Reinforcement learning
Uncertainty in AI: UAI: Sequential decision making