Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls
Publication: 830557
DOI: 10.1016/j.csda.2020.107047
OpenAlex: W3041493066
MaRDI QID: Q830557
Jen-Chih Yao, Chong Li, Dongya Wu, Xin Li, Jin-Hua Wang
Publication date: 7 May 2021
Published in: Computational Statistics and Data Analysis
Full work available at URL: https://arxiv.org/abs/1911.08061
Keywords: convergence rate, sparse recovery, proximal gradient method, statistical consistency, recovery bound, nonconvex regularized \(M\)-estimators
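The keywords above mention the proximal gradient method together with nonconvex regularization; the minimax concave penalty (MCP) also appears in the cited works. The following is a minimal, generic sketch of a proximal gradient iteration for MCP-penalized least squares, given only as an illustration of these keywords and not as the estimator or algorithm analysed in the paper; the function names and parameters (`mcp_prox`, `prox_grad_mcp`, `lam`, `gamma`) are chosen here for exposition.

```python
import numpy as np

def mcp_prox(z, lam, gamma, step):
    """Elementwise proximal operator of the MCP penalty.
    Uses the closed form valid when gamma > step (assumed here)."""
    out = np.zeros_like(z)
    absz = np.abs(z)
    mid = (absz > step * lam) & (absz <= gamma * lam)
    out[mid] = np.sign(z[mid]) * (absz[mid] - step * lam) / (1.0 - step / gamma)
    big = absz > gamma * lam
    out[big] = z[big]  # no shrinkage beyond gamma * lam
    return out

def prox_grad_mcp(X, y, lam, gamma=3.0, n_iter=500):
    """Proximal gradient descent for (1/2n)||y - X beta||^2 + MCP(beta)."""
    n, p = X.shape
    step = n / (np.linalg.norm(X, 2) ** 2)  # 1/L with L = sigma_max(X)^2 / n
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n      # gradient of the smooth loss
        beta = mcp_prox(beta - step * grad, lam, gamma, step)
    return beta
```

The sketch composes a gradient step on the smooth least-squares loss with the closed-form MCP proximal map; other nonconvex penalties (e.g., SCAD) would only change the prox step.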
Related Items (2)
- Sparse estimation in high-dimensional linear errors-in-variables regression via a covariate relaxation method
- Low-rank matrix estimation via nonconvex optimization methods in multi-response errors-in-variables regression
Cites Work
- Measurement error in Lasso: impact and likelihood bias correction
- Nearly unbiased variable selection under minimax concave penalty
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- One-step sparse estimates in nonconcave penalized likelihood models
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Combining different procedures for adaptive regression
- Bayesian neural networks with confidence estimations applied to data mining
- An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation
- Optimal adaptive estimation of linear functionals under sparsity
- Simultaneous analysis of Lasso and Dantzig selector
- Variable selection using MM algorithms
- Adaptive Minimax Estimation over Sparse \(\ell_q\)-Hulls
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Regularization and Variable Selection Via the Elastic Net
- Measurement Error in Nonlinear Models
- Stable signal recovery from incomplete and inaccurate measurements
- Regularized \(M\)-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems
- Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation