Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
From MaRDI portal
Publication:5502126
Abstract: We provide novel theoretical results regarding local optima of regularized \(M\)-estimators, allowing for nonconvexity in both loss and penalty functions. Under restricted strong convexity on the loss and suitable regularity conditions on the penalty, we prove that \emph{any stationary point} of the composite objective function lies within statistical precision of the underlying parameter vector. Our theory covers many nonconvex objective functions of interest, including the corrected Lasso for errors-in-variables linear models; regression in generalized linear models with nonconvex penalties such as SCAD, MCP, and capped-\(\ell_1\); and high-dimensional graphical model estimation. We quantify statistical accuracy by providing bounds on the \(\ell_1\)-, \(\ell_2\)-, and prediction error between stationary points and the population-level optimum. We also propose a simple modification of composite gradient descent that may be used to obtain a near-global optimum within statistical precision \(\epsilon_{\mathrm{stat}}\) in \(\log(1/\epsilon_{\mathrm{stat}})\) steps, which is the fastest possible rate of any first-order method. We provide simulation studies illustrating the sharpness of our theoretical results.
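The composite gradient scheme described in the abstract can be illustrated with a minimal sketch: proximal (composite) gradient descent on a least-squares loss with the MCP penalty. This is an illustrative toy implementation, not the authors' exact algorithm; the step-size rule, the MCP concavity parameter \(b = 3\), and all variable names here are assumptions chosen for the example.

```python
import numpy as np

def mcp_prox(z, lam, b, eta):
    """Proximal operator of the MCP penalty rho_lam with step size eta.
    Assumes eta < b, so the prox map is single-valued:
      0                                    if |z| <= eta*lam
      sign(z)*(|z| - eta*lam)/(1 - eta/b)  if eta*lam < |z| <= b*lam
      z                                    if |z| > b*lam  (no shrinkage)
    """
    out = np.zeros_like(z)
    absz = np.abs(z)
    mid = (absz > eta * lam) & (absz <= b * lam)
    out[mid] = np.sign(z[mid]) * (absz[mid] - eta * lam) / (1.0 - eta / b)
    big = absz > b * lam
    out[big] = z[big]
    return out

def composite_gradient_descent(X, y, lam, b=3.0, n_iter=500):
    """Composite gradient descent for
        (1/2n) * ||y - X beta||^2 + sum_j rho_lam(beta_j),
    where rho_lam is the MCP penalty with concavity parameter b."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the smooth part
    eta = min(1.0 / L, 0.5 * b)              # keep eta < b so the prox is well defined
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n      # gradient of the smooth loss
        beta = mcp_prox(beta - eta * grad, lam, b, eta)
    return beta

# Toy sparse linear regression: k nonzero coefficients out of p.
rng = np.random.default_rng(0)
n, p, k = 200, 50, 5
beta_star = np.zeros(p)
beta_star[:k] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta_star + 0.1 * rng.standard_normal(n)

beta_hat = composite_gradient_descent(X, y, lam=0.1)
print(np.linalg.norm(beta_hat - beta_star))
```

In the paper's terminology, the iterates converge to a stationary point of the nonconvex composite objective; the statistical theory then guarantees that any such stationary point lies within statistical precision of \(\beta^*\) under restricted strong convexity.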
Recommendations
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Support recovery without incoherence: a case for nonconvex regularization
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
Cited in
- Model-free nonconvex matrix completion: local minima analysis and applications in memory-efficient kernel PCA
- Variable Selection With Second-Generation P-Values
- Wavelet-based robust estimation and variable selection in nonparametric additive models
- On the finite-sample analysis of \(\Theta\)-estimators
- Sparse Laplacian shrinkage for nonparametric transformation survival model
- Hard thresholding regression
- Best subset selection for high-dimensional non-smooth models using iterative hard thresholding
- Targeted random projection for prediction from high-dimensional features
- Matrix completion with nonconvex regularization: spectral operators and scalable algorithms
- Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
- Differentially private inference via noisy optimization
- Asymptotic properties on high-dimensional multivariate regression M-estimation
- The landscape of empirical risk for nonconvex losses
- Statistical analysis of sparse approximate factor models
- A primal and dual active set algorithm for truncated \(L_1\) regularized logistic regression
- Penalized wavelet estimation and robust denoising for irregular spaced data
- Model-Assisted Uniformly Honest Inference for Optimal Treatment Regimes in High Dimension
- scientific article (zbMATH DE number 6982301, no title available)
- Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model
- Non-convex penalized multitask regression using data depth-based penalties
- Analysis of global and local optima of regularized quantile regression in high dimensions: a subgradient approach
- Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators
- GSDAR: a fast Newton algorithm for \(\ell_0\) regularized generalized linear models with statistical guarantee
- Computational and statistical analyses for robust non-convex sparse regularized regression problem
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Nonregular and minimax estimation of individualized thresholds in high dimension with binary responses
- An improved algorithm for high-dimensional continuous threshold expectile model with variance heterogeneity
- ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models
- Penalized wavelet nonparametric univariate logistic regression for irregular spaced data
- High-dimensional linear regression with hard thresholding regularization: theory and algorithm
- D4R: doubly robust reduced rank regression in high dimension
- Numerical characterization of support recovery in sparse regression with correlated design
- Support recovery without incoherence: a case for nonconvex regularization
- Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
- Functional additive regression
- Bias versus non-convexity in compressed sensing
- Restricted strong convexity implies weak submodularity
- Finite-sample analysis of \(M\)-estimators using self-concordance
- Sparse M-estimators in semi-parametric copula models
- Sorted concave penalized regression
- A New Principle for Tuning-Free Huber Regression
- Non-convex projected gradient descent for generalized low-rank tensor regression
- Penalized Estimation of Frailty-Based Illness–Death Models for Semi-Competing Risks
- A general family of trimmed estimators for robust high-dimensional data analysis
- Group sparse structural smoothing recovery: model, statistical properties and algorithm
- A generalized formulation for group selection via ADMM
- A few theoretical results for Laplace and arctan penalized ordinary least squares linear regression estimators
- Regularized distributionally robust optimization with application to the index tracking problem
- Nonconvex regularization for sparse neural networks
- Retire: robust expectile regression in high dimensions
- Projection Test for Mean Vector in High Dimensions
- Global solutions to folded concave penalized nonconvex learning
- Minimum distance Lasso for robust high-dimensional regression
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Composite difference-MAX programs for modern statistical estimation problems
- Hard thresholding regularised logistic regression: theory and algorithms
- The finite sample properties of sparse M-estimators with pseudo-observations
- Pathwise coordinate optimization for sparse learning: algorithm and theory
- Rate optimal estimation and confidence intervals for high-dimensional regression with missing covariates
- Robust estimation and shrinkage in ultrahigh dimensional expectile regression with heavy tails and variance heterogeneity
- A diffusion process perspective on posterior contraction rates for parameters
- Distributed testing and estimation under sparse high dimensional models
- scientific article (zbMATH DE number 4176228, no title available)
- Graphical-model based high dimensional generalized linear models
- Robustness and Tractability for Non-convex M-estimators
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Learning Markov models via low-rank optimization
- Sparse classification: a scalable discrete optimization perspective
- Statistical Inference, Learning and Models in Big Data
- Endogeneity in high dimensions
- Asymptotic Properties of Stationary Solutions of Coupled Nonconvex Nonsmooth Empirical Risk Minimization
- Broken adaptive ridge regression and its asymptotic properties
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Penalised robust estimators for sparse and high-dimensional linear models
- A tight bound of hard thresholding
- Accelerated methods for nonconvex optimization
- Bayesian regularization for graphical models with unequal shrinkage
- A high-dimensional M-estimator framework for bi-level variable selection
- Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations
- On high-dimensional Poisson models with measurement error: hypothesis testing for nonlinear nonconvex optimization
- Zero-norm regularized problems: equivalent surrogates, proximal MM method and statistical error bound
- Asymptotic behaviour of penalized robust estimators in logistic regression when dimension increases
- Non-local estimators: a new class of multigrid convergent length estimators
- Rejoinder
- On two recent nonconvex penalties for regularization in machine learning
- Semiparametric efficient estimation in high-dimensional partial linear regression models
- Multiparameter Regularization for Construction of Extrapolating Estimators in Statistical Learning Theory
- Building a telescope to look into high-dimensional image spaces
- Non-concave penalization in linear mixed-effect models and regularized selection of fixed effects
- Sparse precision matrix estimation with missing observations
- Byzantine-robust distributed sparse learning for \(M\)-estimation
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- scientific article (zbMATH DE number 7306869, no title available)
- Bayesian Estimation of Gaussian Conditional Random Fields
- Lower bounds for finding stationary points I
- Minimum average variance estimation with group Lasso for the multivariate response central mean subspace
- A unified primal dual active set algorithm for nonconvex sparse recovery
- Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression
- Adaptive Huber trace regression with low-rank matrix parameter via nonconvex regularization