Cutset Bayesian Networks: A New Representation for Learning Rao-Blackwellised Graphical Models

Tahrima Rahman, Shasha Jin, Vibhav Gogate

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 5751-5757. https://doi.org/10.24963/ijcai.2019/797

Recently, there has been growing interest in learning, from data, probabilistic models that admit poly-time inference, called tractable probabilistic models. Although they generalize poorly compared to intractable models, they often yield more accurate estimates at prediction time. In this paper, we seek to further explore this trade-off between generalization performance and inference accuracy by proposing a novel, partially tractable representation called cutset Bayesian networks (CBNs). The main idea in CBNs is to partition the variables into two subsets X and Y, learn an (intractable) Bayesian network that represents P(X), and learn a tractable conditional model that represents P(Y|X). The hope is that the intractable model will help improve generalization, while the tractable model, by leveraging Rao-Blackwellised sampling, which combines exact inference and sampling, will help improve prediction accuracy. To compactly model P(Y|X), we introduce a novel tractable representation called conditional cutset networks (CCNs) in which all conditional probability distributions are represented using calibrated classifiers, i.e., classifiers that typically yield higher-quality probability estimates than conventional classifiers. We show via a rigorous experimental evaluation that CBNs and CCNs yield more accurate posterior estimates than their tractable as well as intractable counterparts.
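
To make the Rao-Blackwellised estimation step mentioned in the abstract concrete, the following is a minimal sketch of the standard cutset-style importance-sampling estimator it alludes to; the proposal distribution Q, the evidence e, the query variable y, the sample count N, and the weights w_i are illustrative symbols introduced here, not notation taken from the paper. Samples over the (intractable) cutset variables X are drawn from Q, and each remaining conditional query is answered exactly by the tractable conditional model:

\[
\hat{P}(y \mid e) \;=\; \frac{\sum_{i=1}^{N} w_i \, P\big(y \mid x^{(i)}, e\big)}{\sum_{i=1}^{N} w_i},
\qquad
w_i \;=\; \frac{P\big(x^{(i)}, e\big)}{Q\big(x^{(i)}\big)},
\qquad
x^{(i)} \sim Q(X).
\]

Because the inner conditionals are computed exactly rather than sampled, the estimator has lower variance than plain importance sampling over all variables (the Rao-Blackwellisation effect).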
Keywords:
Uncertainty in AI: Approximate Probabilistic Inference
Uncertainty in AI: Bayesian Networks
Uncertainty in AI: Exact Probabilistic Inference
Machine Learning: Learning Graphical Models