High-dimensional learning of linear causal networks via inverse covariance estimation
From MaRDI portal
Publication:2934130
zbMATH Open: 1318.68148 · arXiv: 1311.3492 · MaRDI QID: Q2934130
Authors: Po-Ling Loh, Peter Bühlmann
Publication date: 8 December 2014
Abstract: We establish a new framework for statistical estimation of directed acyclic graphs (DAGs) when data are generated from a linear, possibly non-Gaussian structural equation model. Our framework consists of two parts: (1) inferring the moralized graph from the support of the inverse covariance matrix; and (2) selecting the best-scoring graph among the DAGs consistent with the moralized graph. We show that when the error variances are known or estimated with sufficient precision, the true DAG is the unique minimizer of the score computed using the reweighted squared ℓ2-loss. Our population-level results have implications for the identifiability of linear SEMs when the error covariances are specified up to a constant multiple. On the statistical side, we establish rigorous conditions for high-dimensional consistency of our two-part algorithm, defined in terms of a "gap" between the true DAG and the next-best candidate. Finally, we demonstrate that dynamic programming can be used to select the optimal DAG in linear time when the treewidth of the moralized graph is bounded.
Full work available at URL: https://arxiv.org/abs/1311.3492
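The abstract's two-part framework can be sketched in plain NumPy on a toy three-node chain. This is an illustrative sketch only: the function names, the 0.8 edge weights, and the 0.1 support threshold are assumptions of this example, and the paper's high-dimensional theory would call for a regularized precision-matrix estimator (e.g. the graphical lasso) rather than a plain matrix inverse.

```python
import numpy as np

def moralized_graph(X, threshold=0.1):
    """Part (1): estimate the moralized graph from the support of the
    inverse covariance (precision) matrix.  Here we simply invert the
    empirical covariance and threshold small entries; a regularized
    estimator would be used in the high-dimensional regime."""
    Theta = np.linalg.inv(np.cov(X, rowvar=False))
    adj = np.abs(Theta) > threshold   # edge {j,k} iff Theta[j,k] != 0
    np.fill_diagonal(adj, False)
    return adj

def reweighted_score(X, parents, omega):
    """Part (2): reweighted squared-error score of a candidate DAG,
    given as a parent list: sum_j (1/omega_j) * Var(residual of
    regressing X_j on its parents).  The true DAG is the unique
    minimizer when the error variances omega are (nearly) correct."""
    score = 0.0
    for j, pa in enumerate(parents):
        if pa:
            A = X[:, pa]
            coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ coef
        else:
            resid = X[:, j]
        score += resid.var() / omega[j]
    return score

# Simulate a linear SEM on the chain DAG 0 -> 1 -> 2 with unit error variances:
#   X0 = e0,  X1 = 0.8*X0 + e1,  X2 = 0.8*X1 + e2
rng = np.random.default_rng(0)
n = 5000
e = rng.normal(size=(n, 3))
X = np.empty((n, 3))
X[:, 0] = e[:, 0]
X[:, 1] = 0.8 * X[:, 0] + e[:, 1]
X[:, 2] = 0.8 * X[:, 1] + e[:, 2]

# The chain's moralized graph has edges {0,1} and {1,2} but not {0,2}.
adj = moralized_graph(X)
print(adj[0, 1], adj[1, 2], adj[0, 2])

# With known unit error variances, the true DAG scores below a reversed chain.
omega = [1.0, 1.0, 1.0]
score_true = reweighted_score(X, [[], [0], [1]], omega)   # true DAG 0 -> 1 -> 2
score_rev = reweighted_score(X, [[1], [2], []], omega)    # reversed chain
print(score_true < score_rev)
```

The score comparison illustrates the "gap" condition: consistency of part (2) hinges on the true DAG's score being separated from that of the next-best candidate.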
Recommendations
- Learning high-dimensional Gaussian linear structural equation models with heterogeneous error variances
- High-dimensional causal discovery under non-Gaussianity
- Learning high-dimensional directed acyclic graphs with latent and selection variables
- Learning causal networks via additive faithfulness
- The reduced PC-algorithm: improved causal structure learning in large random networks
Keywords: causal inference · dynamic programming · identifiability · causal networks · linear structural equation models · inverse covariance matrix estimation
MSC classification: Learning and adaptive systems in artificial intelligence (68T05) · Applications of graph theory (05C90) · Estimation in multivariate analysis (62H12)
Cited In (28)
- Learning high-dimensional directed acyclic graphs with latent and selection variables
- Learning high-dimensional Gaussian linear structural equation models with heterogeneous error variances
- Causal network learning with non-invertible functional relationships
- Causal Dantzig: fast inference in linear structural equation models with hidden variables under additive interventions
- \(\mathsf{PenPC}\): a two-step approach to estimate the skeletons of high-dimensional directed acyclic graphs
- High-dimensional joint estimation of multiple directed Gaussian graphical models
- The reduced PC-algorithm: improved causal structure learning in large random networks
- Learning quadratic variance function (QVF) DAG models via overdispersion scoring (ODS)
- High-dimensional Poisson structural equation model learning via \(\ell_1\)-regularized regression
- Learning causal networks via additive faithfulness
- Identifiability of additive noise models using conditional variances
- Sparse inverse covariance matrix estimation via the \(\ell_0\)-norm with Tikhonov regularization
- Computationally Efficient Learning of Gaussian Linear Structural Equation Models with Equal Error Variances
- Title not available
- Robust estimation of Gaussian linear structural equation models with equal error variances
- Title not available
- CAM: causal additive models, high-dimensional order search and penalized regression
- Marginal integration for nonparametric causal inference
- Feedback and mediation in causal inference illustrated by stochastic process models
- Identifiability of Gaussian linear structural equation models with homogeneous and heterogeneous error variances
- Identifiability of homoscedastic linear structural equation models using algebraic matroids
- Spectral Bayesian network theory
- A causal discovery algorithm based on the prior selection of leaf nodes
- Nonparametric and high-dimensional functional graphical models
- A review of Gaussian Markov models for conditional independence
- Densely connected sub-Gaussian linear structural equation model learning via \(\ell_1\)- and \(\ell_2\)-regularized regressions
- The three faces of faithfulness
- High-dimensional causal discovery under non-Gaussianity