Fast Sparse Gaussian Markov Random Fields Learning Based on Cholesky Factorization

Ivan Stojkovic, Vladisav Jelisavcic, Veljko Milutinovic, Zoran Obradovic

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2758-2764. https://doi.org/10.24963/ijcai.2017/384

Learning a sparse Gaussian Markov Random Field, or equivalently, estimating a sparse inverse covariance matrix, is an approach to uncovering the underlying dependency structure in data. Most current methods solve the problem by optimizing a maximum likelihood objective with an L1 (Laplace prior) penalty on the entries of the precision matrix. We propose a novel objective with a regularization term that penalizes an approximate product of the Cholesky-factorized precision matrix. This reparametrization of the penalty term allows efficient coordinate descent optimization, which, in synergy with an active-set approach, yields a very fast method for learning the sparse inverse covariance matrix. We evaluated the speed and solution quality of the proposed SCHL method on problems with up to 24,840 variables. Our approach was several times faster than three state-of-the-art approaches. We also demonstrate that SCHL can be used to discover interpretable networks by applying it to a high-impact problem from the health informatics domain.
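For context, a minimal sketch of the objectives involved (not taken verbatim from the paper): the standard L1-penalized maximum likelihood formulation is the graphical lasso problem, where S is the sample covariance, \Theta the precision matrix, and \lambda the regularization weight,

\min_{\Theta \succ 0} \; -\log\det\Theta + \operatorname{tr}(S\Theta) + \lambda \|\Theta\|_1 .

Writing \Theta = L L^\top with L a lower-triangular Cholesky factor, and using \log\det\Theta = 2\sum_i \log L_{ii}, a Cholesky-parametrized form reads, schematically,

\min_{L} \; -2\sum_i \log L_{ii} + \operatorname{tr}(S L L^\top) + \lambda \,\big\| \widetilde{L L^\top} \big\|_1 ,

where \widetilde{L L^\top} denotes the approximate product of the Cholesky factors that the proposed penalty acts on; the exact form of that approximation is specified in the paper and may differ from this sketch.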
Keywords:
Machine Learning: Learning Graphical Models
Multidisciplinary Topics and Applications: Computational Biology and e-Health
Machine Learning: Unsupervised Learning