Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
DOI: 10.1214/14-AOS1238 · zbMATH: 1302.62066 · arXiv: 1306.4960 · OpenAlex: W3103820806 · Wikidata: Q43079370 · Scholia: Q43079370 · MaRDI QID: Q482875
Zhaoran Wang, Tong Zhang, Han Liu
Publication date: 6 January 2015
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1306.4960
Keywords: path-following method; geometric computational rate; nonconvex regularized \(M\)-estimation; optimal statistical rate
MSC classifications: Parametric inference under constraints (62F30); Generalized linear models (logistic models) (62J12); Nonconvex programming, global optimization (90C26); Methods of reduced gradient type (90C52)
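The keywords above name the paper's core technique: a path-following (homotopy) scheme for nonconvex regularized \(M\)-estimation. As a purely illustrative sketch of that idea, not the authors' algorithm, the Python snippet below runs proximal-gradient steps on a least-squares loss with the minimax concave penalty (MCP), warm-starting along a geometrically decreasing sequence of regularization parameters. The function names, step-size safeguard, and iteration counts are assumptions made for the example.

```python
import numpy as np

def mcp_prox(z, lam, gamma):
    """Proximal operator of the MCP penalty, applied coordinate-wise.
    Requires gamma > 1; coordinates with |z| > gamma*lam are left unshrunk."""
    soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
    return np.where(np.abs(z) <= gamma * lam, soft / (1.0 - 1.0 / gamma), z)

def path_following_mcp(X, y, lam_target, gamma=3.0, eta=0.9, n_inner=200):
    """Illustrative path-following sketch: solve MCP-penalized least squares
    along a geometrically decreasing lambda sequence, warm-starting each
    stage at the previous stage's solution."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    step = min(1.0 / L, 0.5 * gamma)       # keep step < gamma so the prox stays well defined
    lam0 = np.max(np.abs(X.T @ y)) / n     # smallest lambda at which beta = 0 is optimal
    lams = [lam0]
    while lams[-1] > lam_target:
        lams.append(max(eta * lams[-1], lam_target))
    beta = np.zeros(p)
    for lam in lams[1:]:
        for _ in range(n_inner):           # proximal-gradient inner loop
            grad = X.T @ (X @ beta - y) / n
            # prox of step * MCP(lam, gamma) equals mcp_prox with rescaled parameters
            beta = mcp_prox(beta - step * grad, step * lam, gamma / step)
    return beta

if __name__ == "__main__":
    # Hypothetical demo data: 3-sparse signal, mild noise.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))
    beta_true = np.zeros(50)
    beta_true[:3] = 2.0
    y = X @ beta_true + 0.1 * rng.standard_normal(100)
    print(np.round(path_following_mcp(X, y, lam_target=0.05)[:5], 3))
```

The warm starts are the point of the construction: each stage begins near its own optimum, so few inner iterations suffice, which is the intuition behind the "geometric computational rate" keyword.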
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- Gradient methods for minimizing composite functions
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Iterative hard thresholding for compressed sensing
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sparsity in penalized empirical risk minimization
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Introductory lectures on convex optimization. A basic course.
- Least angle regression. (With discussion)
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
- Sparse permutation invariant covariance estimation
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Multi-stage convex relaxation for feature selection
- Calibrating nonconvex penalized regression in ultra-high dimension
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- Strong oracle optimality of folded concave penalized estimation
- Variable selection using MM algorithms
- Piecewise linear regularized solution paths
- A Proximal-Gradient Homotopy Method for the Sparse Least-Squares Problem
- SparseNet: Coordinate Descent With Nonconvex Penalties
- Decoding by Linear Programming
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sparse Reconstruction by Separable Approximation
- Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell_1$-Constrained Quadratic Programming (Lasso)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Smoothly Clipped Absolute Deviation on High Dimensions
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems