Probabilistic Rule Induction from Event Sequences with Logical Summary Markov Models

Debarun Bhattacharjya, Oktie Hassanzadeh, Ronny Luss, Keerthiram Murugesan

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 5667-5675. https://doi.org/10.24963/ijcai.2023/629

Event sequences are widely available across application domains, and there is a long history of models for representing and analyzing such datasets. Summary Markov models are a recent addition to the literature that help identify the subset of event types that influence event types of interest to a user. In this paper, we introduce logical summary Markov models, a family of models for event sequences that enable interpretable predictions through logical rules relating historical predicates to the probability of observing an event type at an arbitrary position in the sequence. We illustrate their connection to prior parametric summary Markov models as well as probabilistic logic programs, and propose new models from this family along with efficient greedy search algorithms for learning them from data. The proposed models outperform relevant baselines on most datasets in an empirical investigation of a probabilistic prediction task. We also compare the number of influencers that various logical summary Markov models learn on real-world datasets, and conduct a brief exploratory qualitative study to gauge the promise of such symbolic models for guiding large language models in predicting societal events.
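The core idea in the abstract can be sketched with a toy example. The following is a minimal illustration, not the paper's actual formulation: it assumes a single historical predicate per influencer event type ("did type X occur in the last k events?") and a hypothetical hand-written rule table mapping predicate states to the probability of a target event type.

```python
# Toy sketch of a rule-based summary model over an event sequence.
# Assumptions (not from the paper): one look-back-window predicate per
# influencer, and an explicit rule table instead of learned parameters.

from typing import Dict, List, Tuple


def predicate_state(history: List[str], influencers: List[str], k: int) -> Tuple[bool, ...]:
    """Evaluate one historical predicate per influencer over the last k events."""
    window = history[-k:]
    return tuple(x in window for x in influencers)


def target_probability(
    history: List[str],
    influencers: List[str],
    k: int,
    rules: Dict[Tuple[bool, ...], float],
    default: float = 0.05,
) -> float:
    """Look up the rule matching the current predicate state."""
    return rules.get(predicate_state(history, influencers, k), default)


if __name__ == "__main__":
    influencers = ["A", "B"]  # hypothetical influencing event types
    rules = {
        (True, True): 0.8,    # both A and B seen recently -> target likely
        (True, False): 0.5,
        (False, True): 0.3,
        (False, False): 0.05,
    }
    seq = ["A", "C", "B", "A"]
    print(target_probability(seq, influencers, k=3, rules=rules))  # -> 0.8
```

Here the last three events contain both A and B, so the rule for the state (True, True) fires. The paper's models instead learn which event types to treat as influencers and estimate the rule probabilities from data.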
Keywords:
Uncertainty in AI: UAI: Tractable probabilistic models
Data Mining: DM: Mining spatial and/or temporal data
Machine Learning: ML: Time series and data streams