Data-Driven Invariant Learning for Probabilistic Programs (Extended Abstract)

Jialu Bao, Nitesh Trivedi, Drashti Pathak, Justin Hsu, Subhajit Roy

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 6415-6419. https://doi.org/10.24963/ijcai.2023/712

The weakest pre-expectation framework of Morgan and McIver for deductive verification of probabilistic programs generalizes binary state assertions to real-valued expectations, which measure expected values of expressions over probabilistic program variables. While loop-free programs can be analyzed by mechanically transforming expectations, verifying programs with loops requires finding an invariant expectation. We view invariant expectation synthesis as a regression problem: given an input state, predict the average value of the post-expectation in the output distribution. With this perspective, we develop the first data-driven invariant synthesis method for probabilistic programs. Unlike prior work on probabilistic invariant inference, our approach learns piecewise continuous invariants without relying on template expectations. We also develop a data-driven approach to learning sub-invariants, which can be used to upper- or lower-bound expected values. We implement both approaches and demonstrate their effectiveness on a variety of benchmarks from the probabilistic programming literature.
Keywords (Sister Conferences Best Papers track):
Constraint Satisfaction and Optimization
Knowledge Representation and Reasoning
Uncertainty in AI
Search
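
For readers placing the abstract, the loop rule behind it can be stated in standard weakest pre-expectation notation (a standard Morgan-McIver formulation, not quoted from the paper). For a loop while G do P with post-expectation f, define the characteristic function

    \Phi(X) = [G] \cdot \mathrm{wpe}(P, X) + [\neg G] \cdot f

where [G] is the 0/1-valued indicator expectation of the guard and \mathrm{wpe}(P, X) is the mechanical expectation transformer applied to the loop body. The weakest pre-expectation of the loop is the least fixed point of \Phi; an invariant expectation is a solution of I = \Phi(I), and a sub-invariant is any I with I \leq \Phi(I). Invariant synthesis is the problem of finding such an I in closed form.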
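To make the regression view concrete, here is a minimal, self-contained Python sketch; it is not the authors' implementation, and the loop, the state features, and every name below are illustrative assumptions. It samples runs of a small geometric loop to estimate the expected post-value from each initial state, then fits a linear model whose coefficients form a candidate invariant expectation. A full method would still verify the candidate against the fixed-point condition.

import random
import numpy as np

P = 0.5  # probability the loop continues each round (fixed for this sketch)

def run_loop(x, c, p=P):
    # One sampled run of: while c == 1: { x := x + 1; c ~ Bernoulli(p) }
    while c == 1:
        x += 1
        c = 1 if random.random() < p else 0
    return x  # post-expectation f = x, evaluated on the final state

def estimate_post(x, c, trials=2000):
    # Monte Carlo estimate of the expected post-value from initial state (x, c)
    return sum(run_loop(x, c) for _ in range(trials)) / trials

# Regression dataset: initial states paired with estimated expected post-values.
states = [(x, c) for x in range(10) for c in (0, 1)]
features = np.array([[1.0, x, c] for (x, c) in states])  # basis: 1, x, [c = 1]
targets = np.array([estimate_post(x, c) for (x, c) in states])

# Least-squares fit; the learned expression is a candidate invariant expectation.
coef, *_ = np.linalg.lstsq(features, targets, rcond=None)
print("candidate invariant: %.2f + %.2f*x + %.2f*[c=1]" % tuple(coef))
# With p = 0.5 the exact invariant is x + 2*[c = 1], so coef is close to (0, 1, 2).

One can check the candidate against the fixed-point condition above: \mathrm{wpe} of the body maps I = x + 2[c = 1] to (x + 1) + 2p, so with p = 0.5, \Phi(I) = [c = 1](x + 2) + [c = 0]\,x = I.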