Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
From MaRDI portal
Abstract: We provide a theoretical analysis of the statistical and computational properties of penalized \(M\)-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall into this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods, which attain geometric rates of convergence only for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to its use of a nonconvex penalty.
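The path-following scheme described in the abstract — solving a sequence of nonconvex penalized problems along a geometrically decreasing grid of regularization parameters, warm-starting each stage at the previous stage's solution — can be sketched for least squares with the MCP penalty (one common nonconvex regularizer). This is a minimal illustrative sketch, not the paper's algorithm: all function names, parameter choices, and the synthetic data setup below are assumptions.

```python
import numpy as np

def mcp_prox(z, lam, gamma, eta):
    """Closed-form proximal operator of eta * MCP(lam, gamma), valid when gamma > eta.

    Coordinates with |z| >= gamma*lam are left untouched (the unbiasedness
    region of MCP); smaller ones are soft-thresholded and rescaled.
    """
    out = z.copy()
    small = np.abs(z) < gamma * lam
    shrunk = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0) / (1.0 - eta / gamma)
    out[small] = shrunk[small]
    return out

def path_following(X, y, lam_target, n_lams=10, gamma=3.0, iters=200):
    """Proximal gradient along a geometric lambda path with warm starts."""
    n, d = X.shape
    eta = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # stepsize 1/L for the LS loss
    lam_max = np.max(np.abs(X.T @ y)) / n          # smallest lam giving the zero solution
    lams = np.geomspace(lam_max, lam_target, n_lams)
    beta = np.zeros(d)
    for lam in lams:                               # each stage warm-starts at previous beta
        for _ in range(iters):
            grad = X.T @ (X @ beta - y) / n
            beta = mcp_prox(beta - eta * grad, lam, gamma, eta)
    return beta

# Demo on synthetic sparse regression (illustrative setup, not the paper's experiments).
rng = np.random.default_rng(0)
n, d = 100, 20
X = rng.standard_normal((n, d))
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -2.0, 2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta = path_following(X, y, lam_target=0.05)
```

With strong signals and a well-conditioned design, the final-stage estimate is near-unbiased on the true support and (near-)zero elsewhere, consistent with the oracle property the abstract attributes to nonconvex penalization.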
Recommendations
- Convergence rates for regularization with sparsity constraints
- Optimal rates of convergence for sparse covariance matrix estimation
- A unified approach to convergence rates for \(\ell^{1}\)-regularization and lacking sparsity
- Convergence rates in \(\ell^1\)-regularization if the sparsity assumption fails
- The Convergence Guarantees of a Non-Convex Approach for Sparse Recovery
- Stochastic proximal methods for non-smooth non-convex constrained sparse optimization
- Nonconvex Sparse Regularization for Deep Neural Networks and Its Optimality
- Estimating sparse precision matrix: optimal rates of convergence and adaptive estimation
- Sparse learning for large-scale and high-dimensional data: a randomized convex-concave optimization approach
- Sparse recovery by non-convex optimization - instance optimality
Cites work
- scientific article; zbMATH DE number 5957245
- scientific article; zbMATH DE number 5957506
- scientific article; zbMATH DE number 845714
- A general theory of concave regularization for high-dimensional sparse estimation problems
- A proximal-gradient homotopy method for the sparse least-squares problem
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors
- Analysis of multi-stage convex relaxation for sparse regularization
- Calibrating nonconvex penalized regression in ultra-high dimension
- Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection
- Decoding by Linear Programming
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Gradient methods for minimizing composite functions
- High-dimensional generalized linear models and the lasso
- Introductory lectures on convex optimization. A basic course.
- Iterative hard thresholding for compressed sensing
- Least angle regression. (With discussion)
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Multi-stage convex relaxation for feature selection
- Nearly unbiased variable selection under minimax concave penalty
- One-step sparse estimates in nonconcave penalized likelihood models
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Piecewise linear regularized solution paths
- Quantile Regression for Analyzing Heterogeneity in Ultra-High Dimension
- Restricted eigenvalue properties for correlated Gaussian designs
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- Smoothly clipped absolute deviation on high dimensions
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Sparse Reconstruction by Separable Approximation
- Sparse permutation invariant covariance estimation
- SparseNet: coordinate descent with nonconvex penalties
- Sparsity in penalized empirical risk minimization
- Strong oracle optimality of folded concave penalized estimation
- Structure estimation for discrete graphical models: generalized covariance matrices and their inverses
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Thresholding-based iterative selection procedures for model selection and shrinkage
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Variable selection using MM algorithms
Cited in (56)
- Inference for high-dimensional linear expectile regression with de-biasing method
- Accelerated gradient methods for sparse statistical learning with nonconvex penalties
- Best subset selection for high-dimensional non-smooth models using iterative hard thresholding
- Path-following methods for maximum a posteriori estimators in Bayesian hierarchical models: how estimates depend on hyperparameters
- Fully polynomial-time randomized approximation schemes for global optimization of high-dimensional minimax concave penalized generalized linear models
- Penalised robust estimators for sparse and high-dimensional linear models
- Hard thresholding regularised logistic regression: theory and algorithms
- Efficient learning with a family of nonconvex regularizers by redistributing nonconvexity
- Computational and statistical tradeoffs via convex relaxation
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- A unifying framework of high-dimensional sparse estimation with difference-of-convex (DC) regularizations
- Hypothesis testing in large-scale functional linear regression
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Approximate message passing for nonconvex sparse regularization with stability and asymptotic analysis
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Bias versus non-convexity in compressed sensing
- A theoretical understanding of self-paced learning
- Minimum average variance estimation with group Lasso for the multivariate response central mean subspace
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Global solutions to folded concave penalized nonconvex learning
- Accelerate the warm-up stage in the Lasso computation via a homotopic approach
- Distributed testing and estimation under sparse high dimensional models
- Model selection in high-dimensional quantile regression with seamless \(L_0\) penalty
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Dynamic behavior analysis via structured rank minimization
- Covariate-Assisted Sparse Tensor Completion
- Optimal sparsity testing in linear regression model
- Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow
- Support recovery without incoherence: a case for nonconvex regularization
- Misspecified nonconvex statistical optimization for sparse phase retrieval
- The spike-and-slab LASSO
- On semiparametric exponential family graphical models
- An unbiased approach to compressed sensing
- Penalized Estimation of Frailty-Based Illness–Death Models for Semi-Competing Risks
- Nonconvex regularization for sparse neural networks
- Computational and statistical analyses for robust non-convex sparse regularized regression problem
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- Analysis of multi-stage convex relaxation for sparse regularization
- Preface
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- Fully Bayesian logistic regression with hyper-LASSO priors for high-dimensional feature selection
- Sorted concave penalized regression
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Point source super-resolution via non-convex \(L_1\) based methods
- scientific article; zbMATH DE number 6982301
- Difference-of-convex learning: directional stationarity, optimality, and sparsity
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression
- An outer-inner linearization method for non-convex and nondifferentiable composite regularization problems
- Rate-optimal posterior contraction for sparse PCA
- scientific article; zbMATH DE number 7370581
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
This page was built for publication: Optimal computational and statistical rates of convergence for sparse nonconvex learning problems (MaRDI item Q482875)