The graphical lasso: new insights and alternatives

From MaRDI portal
Publication:1950894

DOI: 10.1214/12-EJS740
zbMATH Open: 1295.62066
arXiv: 1111.5479
OpenAlex: W2105760337
Wikidata: Q41957694
Scholia: Q41957694
MaRDI QID: Q1950894
FDO: Q1950894


Authors: Rahul Mazumder, Trevor Hastie


Publication date: 28 May 2013

Published in: Electronic Journal of Statistics

Abstract: The graphical lasso (Friedman, Hastie and Tibshirani, 2007) is an algorithm for learning the structure in an undirected Gaussian graphical model, using ℓ₁ regularization to control the number of zeros in the precision matrix Θ = Σ⁻¹ (Banerjee et al., 2008; Yuan and Lin, 2007). The R package GLASSO (Friedman, Hastie and Tibshirani, 2007) is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of GLASSO can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior and propose new algorithms that appear to outperform GLASSO. By studying the "normal equations" we see that GLASSO is solving the dual of the graphical lasso penalized likelihood by block coordinate ascent, a result which can also be found in Banerjee et al. (2008). In this dual, the target of estimation is Σ, the covariance matrix, rather than the precision matrix Θ. We propose similar primal algorithms P-GLASSO and DP-GLASSO, which also operate by block-coordinate descent, where Θ is the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate sub-problems. We conclude that DP-GLASSO is superior from several points of view.
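The primal objective underlying the algorithms above (which GLASSO attacks through its dual in Σ, and P-GLASSO/DP-GLASSO attack directly in Θ) is the ℓ₁-penalized Gaussian negative log-likelihood. The following is a minimal illustrative sketch, not the paper's code: it is restricted to the 2×2 case so the determinant and inverse have closed forms, the function names are invented, and whether the diagonal is penalized varies between implementations.

```python
import math

def glasso_objective(theta, S, lam):
    """l1-penalized negative log-likelihood (2x2 case, illustrative):
    f(Theta) = -log det(Theta) + tr(S @ Theta) + lam * sum_ij |theta_ij|
    Here the penalty includes the diagonal; conventions differ on this point.
    """
    det = theta[0][0] * theta[1][1] - theta[0][1] * theta[1][0]
    trace_S_theta = sum(S[i][j] * theta[j][i]
                        for i in range(2) for j in range(2))
    l1 = sum(abs(theta[i][j]) for i in range(2) for j in range(2))
    return -math.log(det) + trace_S_theta + lam * l1

def inv2(M):
    """Closed-form 2x2 inverse, used to check the consistency condition
    the abstract mentions: a converged precision matrix should be the
    inverse of the estimated covariance matrix."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

# With lam = 0 and S positive definite, the minimizer is Theta = S^{-1}.
S = [[2.0, 0.6], [0.6, 1.0]]
theta_hat = inv2(S)
f_star = glasso_objective(theta_hat, S, lam=0.0)

# The objective is convex, so any perturbation cannot decrease it.
perturbed = [[theta_hat[0][0] + 0.1, theta_hat[0][1]],
             [theta_hat[1][0],       theta_hat[1][1]]]
assert glasso_objective(perturbed, S, lam=0.0) > f_star
```

For λ > 0 the minimizer is no longer S⁻¹; the penalty shrinks off-diagonal entries of Θ toward (and, for large enough λ, exactly to) zero, which is how sparsity in the graph is induced.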


Full work available at URL: https://arxiv.org/abs/1111.5479





