Learning loopy graphical models with latent variables: efficient methods and guarantees

From MaRDI portal
Publication:355078

DOI: 10.1214/12-AOS1070
zbMATH Open: 1267.62070
arXiv: 1203.3887
OpenAlex: W1994400630
MaRDI QID: Q355078
FDO: Q355078


Authors: Animashree Anandkumar, Ragupathyraj Valluvan


Publication date: 24 July 2013

Published in: The Annals of Statistics

Abstract: The problem of structure estimation in graphical models with latent variables is considered. We characterize conditions for tractable graph estimation and develop efficient methods with provable guarantees. We consider models where the underlying Markov graph is locally tree-like, and the model is in the regime of correlation decay. For the special case of the Ising model, the number of samples $n$ required for structural consistency of our method scales as $n = \Omega(\theta_{\min}^{-\delta\eta(\eta+1)-2} \log p)$, where $p$ is the number of variables, $\theta_{\min}$ is the minimum edge potential, $\delta$ is the depth (i.e., the distance from a hidden node to the nearest observed nodes), and $\eta$ is a parameter which depends on the bounds on the node and edge potentials in the Ising model. Necessary conditions for structural consistency under any algorithm are derived, and our method nearly matches the lower bound on sample requirements. Further, the proposed method is practical to implement and provides flexibility to control the number of latent variables and the cycle lengths in the output graph.
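The sample-complexity scaling stated in the abstract can be evaluated numerically to see how the required sample size grows as the hidden nodes sit deeper in the graph. The sketch below is purely illustrative: the constants hidden by the $\Omega(\cdot)$ notation are ignored, and the parameter values chosen (`theta_min`, `delta`, `eta`, `p`) are hypothetical, not taken from the paper.

```python
import math

def ising_sample_bound(theta_min: float, delta: int, eta: float, p: int) -> float:
    """Illustrative evaluation of the abstract's scaling
    n = Omega(theta_min^(-delta*eta*(eta+1) - 2) * log p)
    for the Ising-model special case. Omega constants are ignored."""
    exponent = -delta * eta * (eta + 1) - 2
    return theta_min ** exponent * math.log(p)

# Hypothetical parameters: weak edges (theta_min = 0.2), hidden nodes at
# depth 1, eta = 1, and p = 1000 variables.
n_shallow = ising_sample_bound(theta_min=0.2, delta=1, eta=1.0, p=1000)

# Increasing the depth delta of the hidden nodes raises the exponent on
# 1/theta_min, so the sample requirement grows sharply.
n_deep = ising_sample_bound(theta_min=0.2, delta=2, eta=1.0, p=1000)

assert n_deep > n_shallow > 0
```

The exponential dependence on the depth $\delta$ is the point of the comparison: latent variables that are far from all observed nodes make structure recovery much more sample-hungry, which is consistent with the abstract's emphasis on controlling the number of latent variables in the output graph.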


Full work available at URL: https://arxiv.org/abs/1203.3887










Cited In (8)






