LTL-Constrained Steady-State Policy Synthesis

Jan Křetínský

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 4104-4111. https://doi.org/10.24963/ijcai.2021/565

Decision-making policies for agents are often synthesized with the constraint that a formal specification of behaviour is satisfied. Here we focus on infinite-horizon properties. On the one hand, Linear Temporal Logic (LTL) is a popular example of a formalism for qualitative specifications. On the other hand, Steady-State Policy Synthesis (SSPS) has recently received considerable attention as it provides a more quantitative and more behavioural perspective on specifications, in terms of the frequency with which states are visited. Finally, rewards provide a classic framework for quantitative properties. In this paper, we study Markov decision processes (MDP) with specifications combining all three types. The derived policy maximizes the reward among all policies ensuring that the LTL specification is satisfied with the given probability and that the steady-state constraints are adhered to. To this end, we provide a unified solution reducing the multi-type specification to a multi-dimensional long-run average reward. This is enabled by Limit-Deterministic Büchi Automata (LDBA), recently studied in the context of LTL model checking on MDP, and allows for an elegant solution through a simple linear programme. The algorithm also extends to general omega-regular properties and runs in time polynomial in the sizes of the MDP as well as the LDBA.
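The steady-state and reward parts of such a specification can be illustrated with a standard occupation-measure linear programme. The following is a minimal sketch only, not the paper's full construction: it maximizes the long-run average reward of a small unichain MDP subject to a bound on the visit frequency of one state, and it omits the LTL/LDBA product step. The MDP data (P, r), the frequency bound, and all names are illustrative placeholders.

```python
# Sketch: long-run average reward maximization under a steady-state
# constraint, via an LP over state-action frequencies x[s, a].
import numpy as np
from scipy.optimize import linprog

# Toy MDP: 3 states, 2 actions. P[s, a, s'] = transition probability.
P = np.array([
    [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]],
    [[0.0, 0.5, 0.5], [0.8, 0.2, 0.0]],
    [[0.2, 0.0, 0.8], [0.0, 0.3, 0.7]],
])
r = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 0.5]])  # r[s, a] = reward
S, A = r.shape

def idx(s, a):
    # Position of variable x[s, a] in the flattened vector of length S*A.
    return s * A + a

# Stationarity (flow) constraints: for each state s,
#   sum_a x[s, a] - sum_{s', a'} P[s', a', s] * x[s', a'] = 0,
# plus normalization: all frequencies sum to 1.
A_eq = np.zeros((S + 1, S * A))
b_eq = np.zeros(S + 1)
for s in range(S):
    for a in range(A):
        A_eq[s, idx(s, a)] += 1.0
    for sp in range(S):
        for ap in range(A):
            A_eq[s, idx(sp, ap)] -= P[sp, ap, s]
A_eq[S, :] = 1.0
b_eq[S] = 1.0

# Example steady-state constraint: state 0 is visited with long-run
# frequency at least 0.2, written as  -sum_a x[0, a] <= -0.2.
A_ub = np.zeros((1, S * A))
for a in range(A):
    A_ub[0, idx(0, a)] = -1.0
b_ub = np.array([-0.2])

# Maximize the expected long-run average reward sum_{s,a} r[s,a] * x[s,a]
# (linprog minimizes, hence the negated objective).
res = linprog(c=-r.flatten(), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
x = res.x.reshape(S, A)
state_freq = x.sum(axis=1)
# A memoryless policy can be read off as pi(a | s) = x[s, a] / state_freq[s]
# wherever state_freq[s] > 0.
print("optimal long-run average reward:", -res.fun)
print("steady-state distribution over states:", state_freq)
```

The paper's contribution lies in combining such frequency constraints with an LTL specification: very roughly, the MDP is composed with an LDBA for the formula and the acceptance condition is encoded as an additional long-run average objective, so that a single LP of this shape handles all three specification types at once.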
Keywords:
Planning and Scheduling: Markov Decisions Processes
Agent-based and Multi-agent Systems: Formal Verification, Validation and Synthesis
Uncertainty in AI: Markov Decision Processes