I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
From MaRDI portal
Publication: 1750288
DOI: 10.1214/17-AOS1568 · zbMath: 1392.62215 · arXiv: 1507.01037 · OpenAlex: W2964248738 · Wikidata: Q55265427 · Scholia: Q55265427 · MaRDI QID: Q1750288
Jianqing Fan, Han Liu, Qiang Sun, Tong Zhang
Publication date: 18 May 2018
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1507.01037
Keywords: optimal rate of convergence; iteration complexity; algorithmic statistics; local adaptive MM; nonconvex statistical optimization
MSC classifications:
- Nonparametric regression and quantile regression (62G08)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Minimax procedures in statistical decision theory (62C20)
Related Items
- Adaptive Huber regression on Markov-dependent data
- Bayesian factor-adjusted sparse regression
- Safe feature screening rules for the regularized Huber regression
- Targeted Inference Involving High-Dimensional Data Using Nuisance Penalized Regression
- Robust variable selection and estimation via adaptive elastic net S-estimators for linear regression
- Sparse Reduced Rank Huber Regression in High Dimensions
- Retire: robust expectile regression in high dimensions
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- High-dimensional composite quantile regression: optimal statistical guarantees and fast algorithms
- Robust Signal Recovery for High-Dimensional Linear Log-Contrast Models with Compositional Covariates
- Renewable Huber estimation method for streaming datasets
- Resampling-based confidence intervals for model-free robust inference on optimal treatment regimes
- Test of significance for high-dimensional longitudinal data
- Hard thresholding regression
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Sorted concave penalized regression
- User-friendly covariance estimation for heavy-tailed distributions
- Iteratively reweighted \(\ell_1\)-penalized robust regression
- Sparse classification: a scalable discrete optimization perspective
- Distributed adaptive Huber regression
- Generalized high-dimensional trace regression via nuclear norm regularization
- High dimensional generalized linear models for temporal dependent data
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- The Adaptive Lasso and Its Oracle Properties
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Minimum distance Lasso for robust high-dimensional regression
- Gradient methods for minimizing composite functions
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Statistics for high-dimensional data. Methods, theory and applications.
- Support recovery without incoherence: a case for nonconvex regularization
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- One-step sparse estimates in nonconcave penalized likelihood models
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- On the conditions used to prove oracle results for the Lasso
- Least squares after model selection in high-dimensional sparse models
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Simultaneous analysis of Lasso and Dantzig selector
- Sparsity oracle inequalities for the Lasso
- Calibrating nonconvex penalized regression in ultra-high dimension
- Pathwise coordinate optimization
- Strong oracle optimality of folded concave penalized estimation
- Global optimality of nonconvex penalized estimators
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Smoothly Clipped Absolute Deviation on High Dimensions
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems