On model selection consistency of regularized M-estimators
From MaRDI portal
Publication:2340872
DOI: 10.1214/15-EJS1013
zbMath: 1309.62044
arXiv: 1305.7477
MaRDI QID: Q2340872
Jonathan E. Taylor, Jason D. Lee, Yuekai Sun
Publication date: 21 April 2015
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1305.7477
Keywords: group lasso, lasso, nuclear norm minimization, generalized lasso, geometrically decomposable penalties, regularized M-estimator
Related Items
- Investigating competition in financial markets: a sparse autologistic model for dynamic network data
- Consistent model selection procedure for general integer-valued time series
- Pairwise sparse + low-rank models for variables of mixed type
- Identifying Brain Hierarchical Structures Associated with Alzheimer's Disease Using a Regularized Regression Method with Tree Predictors
- Subbotin graphical models for extreme value dependencies with applications to functional neuronal connectivity
- Understanding Implicit Regularization in Over-Parameterized Single Index Model
- A tradeoff between false discovery and true positive proportions for sparse high-dimensional logistic regression
- The generalized Lasso problem and uniqueness
- Network classification with applications to brain connectomics
- Factor-Adjusted Regularized Model Selection
- A parallel algorithm for ridge-penalized estimation of the multivariate exponential family from data of mixed types
- Sparse regression for extreme values
- Sparse Poisson regression with penalized weighted score function
Uses Software
Cites Work
- Sparse inverse covariance estimation with the graphical lasso
- Latent variable graphical model selection via convex optimization
- Simple bounds for recovering low-complexity models
- Statistics for high-dimensional data. Methods, theory and applications.
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
- The solution path of the generalized lasso
- High-dimensional Ising model selection using \(\ell _{1}\)-regularized logistic regression
- Estimating time-varying networks
- The convex geometry of linear inverse problems
- Honest variable selection in linear and logistic regression models via \(\ell _{1}\) and \(\ell _{1}+\ell _{2}\) penalization
- High-dimensional covariance estimation by minimizing \(\ell _{1}\)-penalized log-determinant divergence
- Local behavior of sparse analysis regularization: applications to risk estimation
- Support union recovery in high-dimensional multivariate regression
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- High-dimensional graphs and variable selection with the Lasso
- Atomic Decomposition by Basis Pursuit
- Reconstruction From Anisotropic Random Measurements
- Graphical Models, Exponential Families, and Variational Inference
- Perturbation Bounds of Unitary and Subunitary Polar Factors
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Weakly decomposable regularization penalties and structured sparsity
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers