Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
DOI: 10.1214/14-AOS1238 · zbMATH Open: 1302.62066 · arXiv: 1306.4960 · OpenAlex: W3103820806 · Wikidata: Q43079370 · Scholia: Q43079370 · MaRDI QID: Q482875 · FDO: Q482875
Authors: Zhaoran Wang, Han Liu, Tong Zhang
Publication date: 6 January 2015
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1306.4960
Recommendations
- Convergence rates for regularization with sparsity constraints
- Optimal rates of convergence for sparse covariance matrix estimation
- A unified approach to convergence rates for \(\ell^{1}\)-regularization and lacking sparsity
- Convergence rates in \(\ell^1\)-regularization if the sparsity assumption fails
- The Convergence Guarantees of a Non-Convex Approach for Sparse Recovery
- Scientific article (zbMATH DE number 7370632)
- Nonconvex Sparse Regularization for Deep Neural Networks and Its Optimality
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Sparse learning for large-scale and high-dimensional data: a randomized convex-concave optimization approach
- Sparse recovery by non-convex optimization - instance optimality
Keywords: path-following method; geometric computational rate; nonconvex regularized \(M\)-estimation; optimal statistical rate
MSC classifications: Generalized linear models (logistic models) (62J12); Parametric inference under constraints (62F30); Nonconvex programming, global optimization (90C26); Methods of reduced gradient type (90C52)
Cites Work
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Nearly unbiased variable selection under minimax concave penalty
- SparseNet: coordinate descent with nonconvex penalties
- Least angle regression. (With discussion)
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Title not available
- One-step sparse estimates in nonconcave penalized likelihood models
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Analysis of multi-stage convex relaxation for sparse regularization
- Restricted eigenvalue properties for correlated Gaussian designs
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Introductory lectures on convex optimization. A basic course.
- Sparse permutation invariant covariance estimation
- Variable selection using MM algorithms
- Piecewise linear regularized solution paths
- Gradient methods for minimizing composite functions
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- Title not available
- Decoding by Linear Programming
- Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Smoothly clipped absolute deviation on high dimensions
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Sparse Reconstruction by Separable Approximation
- Iterative hard thresholding for compressed sensing
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Title not available
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Calibrating nonconvex penalized regression in ultra-high dimension
- Strong oracle optimality of folded concave penalized estimation
- Sparsity in penalized empirical risk minimization
- Multi-stage convex relaxation for feature selection
- A proximal-gradient homotopy method for the sparse least-squares problem
- Fast global convergence of gradient methods for high-dimensional statistical recovery
Cited In (55)
- Path-following methods for maximum a posteriori estimators in Bayesian hierarchical models: how estimates depend on hyperparameters
- Inference for high-dimensional linear expectile regression with de-biasing method
- Best subset selection for high-dimensional non-smooth models using iterative hard thresholding
- Fully polynomial-time randomized approximation schemes for global optimization of high-dimensional minimax concave penalized generalized linear models
- Support recovery without incoherence: a case for nonconvex regularization
- An unbiased approach to compressed sensing
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Bias versus non-convexity in compressed sensing
- Sorted concave penalized regression
- Covariate-Assisted Sparse Tensor Completion
- Penalized Estimation of Frailty-Based Illness–Death Models for Semi-Competing Risks
- Title not available
- Nonconvex regularization for sparse neural networks
- Difference-of-convex learning: directional stationarity, optimality, and sparsity
- Hard thresholding regularised logistic regression: theory and algorithms
- Global solutions to folded concave penalized nonconvex learning
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- The spike-and-slab LASSO
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- Distributed testing and estimation under sparse high dimensional models
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Optimal sparsity testing in linear regression model
- Analysis of multi-stage convex relaxation for sparse regularization
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Penalised robust estimators for sparse and high-dimensional linear models
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Minimum average variance estimation with group Lasso for the multivariate response central mean subspace
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Preface
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Computational and statistical tradeoffs via convex relaxation
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
- Point source super-resolution via non-convex \(L_1\) based methods
- An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems
- On semiparametric exponential family graphical models
- Model selection in high-dimensional quantile regression with seamless \(L_0\) penalty
- Misspecified nonconvex statistical optimization for sparse phase retrieval
- Hypothesis testing in large-scale functional linear regression
- Fully Bayesian logistic regression with hyper-LASSO priors for high-dimensional feature selection
- A theoretical understanding of self-paced learning
- Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression
- Rate-optimal posterior contraction for sparse PCA
- Title not available
- Accelerate the warm-up stage in the Lasso computation via a homotopic approach
- Dynamic behavior analysis via structured rank minimization
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Computational and statistical analyses for robust non-convex sparse regularized regression problem
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses