On model selection consistency of regularized M-estimators
Publication: 2340872
DOI: 10.1214/15-EJS1013
zbMATH Open: 1309.62044
arXiv: 1305.7477
MaRDI QID: Q2340872
FDO: Q2340872
Authors: Jason D. Lee, Yuekai Sun, Jonathan Taylor
Publication date: 21 April 2015
Published in: Electronic Journal of Statistics
Abstract: Regularized M-estimators are used in diverse areas of science and engineering to fit high-dimensional models with some low-dimensional structure. Usually the low-dimensional structure is encoded by the presence of the (unknown) parameters in some low-dimensional model subspace. In such settings, it is desirable for estimates of the model parameters to be \emph{model selection consistent}: the estimates also fall in the model subspace. We develop a general framework for establishing consistency and model selection consistency of regularized M-estimators and show how it applies to some special cases of interest in statistical learning. Our analysis identifies two key properties of regularized M-estimators, referred to as geometric decomposability and irrepresentability, that ensure the estimators are consistent and model selection consistent.
Full work available at URL: https://arxiv.org/abs/1305.7477
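For orientation, here is a minimal sketch of the setup the abstract describes; the notation (\(\ell_n\) for the loss, \(\rho\) for the penalty, \(\lambda_n\) for the regularization parameter, \(M\) for the model subspace) is assumed for illustration and is not quoted from the paper:
\[
\hat{\theta} \in \operatorname*{arg\,min}_{\theta} \; \ell_n(\theta) + \lambda_n\,\rho(\theta),
\qquad
\text{model selection consistency:} \quad \Pr\bigl(\hat{\theta} \in M\bigr) \to 1 \ \text{as } n \to \infty.
\]
The lasso is the special case \(\ell_n(\theta) = \tfrac{1}{2n}\lVert y - X\theta \rVert_2^2\) and \(\rho(\theta) = \lVert \theta \rVert_1\), with \(M\) the subspace of vectors supported on the true active set.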
Recommendations
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- The slow, steady ascent of a hot solid sphere in a Newtonian fluid with strongly temperature-dependent viscosity
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- scientific article; zbMATH DE number 5957408
Keywords: lasso; nuclear norm minimization; group lasso; generalized lasso; geometrically decomposable penalties; regularized M-estimator
Cites Work
- Graphical models, exponential families, and variational inference
- Title not available
- Statistics for high-dimensional data. Methods, theory and applications.
- Local behavior of sparse analysis regularization: applications to risk estimation
- High-dimensional graphs and variable selection with the Lasso
- Restricted eigenvalue properties for correlated Gaussian designs
- Title not available
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- The solution path of the generalized lasso
- Sparse inverse covariance estimation with the graphical lasso
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- Latent variable graphical model selection via convex optimization
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- Reconstruction From Anisotropic Random Measurements
- Estimating time-varying networks
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- Consistency of trace norm minimization
- Perturbation Bounds of Unitary and Subunitary Polar Factors
- Weakly decomposable regularization penalties and structured sparsity
- The convex geometry of linear inverse problems
- Atomic decomposition by basis pursuit
- Support union recovery in high-dimensional multivariate regression
- Simple bounds for recovering low-complexity models
Cited In (16)
- Consistent model selection procedure for general integer-valued time series
- The slow, steady ascent of a hot solid sphere in a Newtonian fluid with strongly temperature-dependent viscosity
- The generalized Lasso problem and uniqueness
- Network classification with applications to brain connectomics
- Identifying Brain Hierarchical Structures Associated with Alzheimer's Disease Using a Regularized Regression Method with Tree Predictors
- Subbotin graphical models for extreme value dependencies with applications to functional neuronal connectivity
- Understanding Implicit Regularization in Over-Parameterized Single Index Model
- Factor-Adjusted Regularized Model Selection
- Investigating competition in financial markets: a sparse autologistic model for dynamic network data
- A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
- A parallel algorithm for ridge-penalized estimation of the multivariate exponential family from data of mixed types
- Pairwise sparse + low-rank models for variables of mixed type
- Sparse regression for extreme values
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Variable selection in high dimensional linear regressions with parameter instability
- Sparse Poisson regression with penalized weighted score function