Lexicographic Multi-Objective Reinforcement Learning


Joar Skalse, Lewis Hammond, Charlie Griffin, Alessandro Abate

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 3430-3436. https://doi.org/10.24963/ijcai.2022/476

In this work we introduce reinforcement learning techniques for solving lexicographic multi-objective problems. These are problems that involve multiple reward signals, where the goal is to learn a policy that maximises the first reward signal and, subject to this constraint, also maximises the second reward signal, and so on. We present a family of both action-value and policy gradient algorithms that can be used to solve such problems, and prove that they converge to policies that are lexicographically optimal. We evaluate the scalability and performance of these algorithms empirically, and demonstrate their applicability in practical settings. As a more specific application, we show how our algorithms can be used to impose safety constraints on the behaviour of an agent, and compare their performance in this context with that of other constrained reinforcement learning algorithms.
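
To make the lexicographic criterion concrete, below is a minimal sketch of greedy action selection over prioritised objectives, assuming tabular learning with one Q-table per objective ordered by priority. The function name, the q_tables argument, and the slack tolerance tol are illustrative assumptions, not the paper's actual interface.

```python
import numpy as np

def lexicographic_action(q_tables, state, tol=1e-6):
    """Greedy action choice under a lexicographic ordering of objectives.

    q_tables: list of Q-tables, one per objective in priority order,
              each of shape (n_states, n_actions).
    tol:      slack within which actions count as tied on an objective.
    """
    candidates = np.arange(q_tables[0].shape[1])  # start from all actions
    for q in q_tables:
        values = q[state, candidates]
        # keep only the actions that are (near-)optimal for this objective;
        # remaining ties are broken by the next objective in the ordering
        candidates = candidates[values >= values.max() - tol]
        if len(candidates) == 1:
            break
    return int(candidates[0])

# Example: two actions tie on the high-priority (safety) objective,
# so the lower-priority (task) objective breaks the tie.
q_safety = np.array([[1.0, 1.0, 0.2]])
q_task   = np.array([[0.3, 0.9, 0.6]])
print(lexicographic_action([q_safety, q_task], state=0))  # -> 1
```

A slack of this kind is what lets lower-priority objectives influence behaviour at all: with tol = 0, exact ties between learned values would rarely occur and the first objective would dominate entirely.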
Keywords:
Machine Learning: Reinforcement Learning
AI Ethics, Trust, Fairness: Safety & Robustness
Constraint Satisfaction and Optimization: Constraints and Machine Learning