Missing values: sparse inverse covariance estimation and an extension to sparse regression
From MaRDI portal
Abstract: We propose an \(\ell_{1}\)-regularized likelihood method for estimating the inverse covariance matrix in the high-dimensional multivariate normal model in the presence of missing data. Our method is based on the assumption that the data are missing at random (MAR), which also covers the missing completely at random (MCAR) case. The implementation of the method is non-trivial, as the observed negative log-likelihood is in general a complicated, non-convex function. We propose an efficient EM algorithm for optimization with provable numerical convergence properties. Furthermore, we extend the methodology to handle missing values in a sparse regression context. We demonstrate both methods on simulated and real data.
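The EM scheme described in the abstract can be sketched as follows: the E-step computes the expected Gaussian sufficient statistics given the observed coordinates of each sample, and the M-step solves an \(\ell_{1}\)-penalized covariance estimation problem. This is a minimal illustrative sketch, not the authors' implementation; it assumes NumPy and delegates the penalized M-step to scikit-learn's `graphical_lasso`, and the function name `em_sparse_precision` and its defaults are hypothetical.

```python
# EM sketch for l1-penalized inverse covariance estimation under MAR
# missingness. Illustrative only: the M-step uses scikit-learn's
# graphical_lasso; names and defaults are assumptions, not the paper's code.
import numpy as np
from sklearn.covariance import graphical_lasso

def em_sparse_precision(X, alpha=0.1, n_iter=50, tol=1e-6):
    """X: (n, p) array with np.nan marking missing entries (assumed MAR)."""
    n, p = X.shape
    mu = np.nanmean(X, axis=0)                    # initialize with column means
    X0 = np.where(np.isnan(X), mu, X)             # mean-impute to initialize
    sigma = np.cov(X0, rowvar=False) + 1e-3 * np.eye(p)
    theta = np.linalg.inv(sigma)
    for _ in range(n_iter):
        sum_x = np.zeros(p)
        sum_xx = np.zeros((p, p))
        for i in range(n):
            m = np.isnan(X[i])                    # missing coordinates
            o = ~m
            x_hat = X[i].copy()
            C = np.zeros((p, p))                  # conditional covariance term
            if m.any():
                # E-step: Gaussian conditional of missing given observed
                B = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
                x_hat[m] = mu[m] + B @ (X[i, o] - mu[o])
                C[np.ix_(m, m)] = sigma[np.ix_(m, m)] - B @ sigma[np.ix_(o, m)]
            sum_x += x_hat
            sum_xx += np.outer(x_hat, x_hat) + C
        mu = sum_x / n                            # M-step: mean update
        emp_cov = sum_xx / n - np.outer(mu, mu)   # expected scatter matrix
        # M-step: l1-penalized covariance / precision estimation
        sigma_new, theta = graphical_lasso(emp_cov, alpha=alpha)
        converged = np.max(np.abs(sigma_new - sigma)) < tol
        sigma = sigma_new
        if converged:
            break
    return mu, sigma, theta
```

Each EM iteration leaves the observed negative log-likelihood non-increasing, which is the numerical convergence property the abstract refers to; the penalty level `alpha` controls the sparsity of the returned precision matrix.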
Recommendations
- High-dimensional covariance matrix estimation with missing observations
- Minimax rate-optimal estimation of high-dimensional covariance matrices with incomplete data
- An Imputation–Regularized Optimization Algorithm for High Dimensional Missing Data Problems and Beyond
- Minimax optimal estimation of high-dimensional sparse covariance matrices with missing data
- Transposable regularized covariance models with an application to missing data imputation
Cites work
- scientific article; zbMATH DE number 3165037
- scientific article; zbMATH DE number 4088699
- scientific article; zbMATH DE number 3567782
- scientific article; zbMATH DE number 1294360
- scientific article; zbMATH DE number 1134987
- scientific article; zbMATH DE number 845714
- A note on the Lasso for Gaussian graphical model selection
- Convergence of a block coordinate descent method for nondifferentiable minimization
- Covariance-regularized regression and classification for high dimensional problems
- High-dimensional graphs and variable selection with the Lasso
- Model selection and estimation in the Gaussian graphical model
- Model selection criteria for missing-data problems using the EM algorithm
- Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data
- On the convergence properties of the EM algorithm
- Pathwise coordinate optimization
- Sparse inverse covariance estimation with the graphical lasso
- Sparse permutation invariant covariance estimation
- \(\ell_{1}\)-penalization for mixture regression models
Cited in (24)
- Minimax optimal estimation of high-dimensional sparse covariance matrices with missing data
- Transposable regularized covariance models with an application to missing data imputation
- Learning causal structure from mixed data with missing values using Gaussian copula models
- Variable selection for high‐dimensional generalized linear model with block‐missing data
- Pattern alternating maximization algorithm for missing data in high-dimensional problems
- scientific article; zbMATH DE number 7750672
- Estimating high-dimensional covariance and precision matrices under general missing dependence
- Sparse precision matrix estimation with missing observations
- Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression
- Minimax rate-optimal estimation of high-dimensional covariance matrices with incomplete data
- Concentration of measure bounds for matrix-variate data with missing values
- Sparse multivariate regression with missing values and its application to the prediction of material properties
- Sensitivity analysis for inference with partially identifiable covariance matrices
- Extending graphical models for applications: on covariates, missingness and normality
- cglasso
- Change-Point Detection for Graphical Models in the Presence of Missing Values
- The conditional censored graphical Lasso estimator
- Graphical Model Inference with Erosely Measured Data
- \(L_{0}\)-regularization for high-dimensional regression with corrupted data
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Optimal sparse linear prediction for block-missing multi-modality data without imputation
- Scalable interpretable learning for multi-response error-in-variables regression
- An ensemble learning method for variable selection: application to high-dimensional data and missing values
- A penalized EM algorithm incorporating missing data mechanism for Gaussian parameter estimation