On model selection consistency of regularized M-estimators
Publication: 2340872
Abstract: Regularized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Usually the low-dimensional structure is encoded by the presence of the (unknown) parameters in some low-dimensional model subspace. In such settings, it is desirable for estimates of the model parameters to be \emph{model selection consistent}: the estimates also fall in the model subspace. We develop a general framework for establishing consistency and model selection consistency of regularized M-estimators and show how it applies to some special cases of interest in statistical learning. Our analysis identifies two key properties of regularized M-estimators, referred to as geometric decomposability and irrepresentability, that ensure the estimators are consistent and model selection consistent.
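A minimal illustrative sketch (not the paper's general framework): the lasso is the canonical regularized M-estimator, with a geometrically decomposable \(\ell_1\) penalty, and under an irrepresentability condition its estimate is model selection consistent, i.e. it recovers the true sparse support. All names, problem sizes, and the regularization level below are arbitrary choices for the demonstration; the solver is plain proximal gradient (ISTA) in NumPy.

```python
import numpy as np

# Synthetic sparse regression: the "low-dimensional model subspace" is the
# set of vectors supported on coordinates {0, 3}.
rng = np.random.default_rng(0)
n, p = 200, 10
beta_true = np.zeros(p)
beta_true[[0, 3]] = [2.0, -1.5]                 # low-dimensional structure
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iter=2000):
    """Proximal gradient (ISTA) for 0.5/n * ||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n           # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n            # gradient of the smooth loss
        z = b - grad / L
        # soft-thresholding = proximal map of the l1 penalty
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

beta_hat = lasso_ista(X, y, lam=0.05)
support_hat = set(np.flatnonzero(np.abs(beta_hat) > 1e-6))
print(support_hat)  # model selection consistency: ideally equals {0, 3}
```

With a well-conditioned Gaussian design and low noise, the irrepresentability condition holds with high probability and the estimated support coincides with the true one; when the condition fails (e.g. strongly correlated columns), the lasso can select spurious coordinates even as the estimate remains consistent in norm.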
Recommendations
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- The slow, steady ascent of a hot solid sphere in a Newtonian fluid with strongly temperature-dependent viscosity
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
Cites work
- scientific article; zbMATH DE number 5957408 (title unavailable)
- scientific article; zbMATH DE number 845714 (title unavailable)
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Atomic decomposition by basis pursuit
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- Consistency of trace norm minimization
- Estimating time-varying networks
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Graphical models, exponential families, and variational inference
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- High-dimensional graphs and variable selection with the Lasso
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- Latent variable graphical model selection via convex optimization
- Local behavior of sparse analysis regularization: applications to risk estimation
- Perturbation Bounds of Unitary and Subunitary Polar Factors
- Reconstruction From Anisotropic Random Measurements
- Restricted eigenvalue properties for correlated Gaussian designs
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Simple bounds for recovering low-complexity models
- Sparse inverse covariance estimation with the graphical lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- Support union recovery in high-dimensional multivariate regression
- The convex geometry of linear inverse problems
- The solution path of the generalized lasso
- Weakly decomposable regularization penalties and structured sparsity
Cited in (16)
- Consistent model selection procedure for general integer-valued time series
- The slow, steady ascent of a hot solid sphere in a Newtonian fluid with strongly temperature-dependent viscosity
- The generalized Lasso problem and uniqueness
- Network classification with applications to brain connectomics
- Identifying Brain Hierarchical Structures Associated with Alzheimer's Disease Using a Regularized Regression Method with Tree Predictors
- Subbotin graphical models for extreme value dependencies with applications to functional neuronal connectivity
- Understanding Implicit Regularization in Over-Parameterized Single Index Model
- Factor-Adjusted Regularized Model Selection
- Investigating competition in financial markets: a sparse autologistic model for dynamic network data
- A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
- A parallel algorithm for ridge-penalized estimation of the multivariate exponential family from data of mixed types
- Pairwise sparse + low-rank models for variables of mixed type
- Sparse regression for extreme values
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Variable selection in high dimensional linear regressions with parameter instability
- Sparse Poisson regression with penalized weighted score function