High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
Publication: 5060495
DOI: 10.1287/opre.2021.2217
OpenAlex: W3208461153
Wikidata: Q114925239
Scholia: Q114925239
MaRDI QID: Q5060495
Authors: Yinyu Ye, Hongcheng Liu, Hung-yi Lee
Publication date: 10 January 2023
Published in: Operations Research
Full work available at URL: https://arxiv.org/abs/1903.00616
Keywords: neural network; sparsity; support vector machine; restricted strong convexity; high-dimensional learning; folded concave penalty; nonsmooth learning
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Smooth minimization of non-smooth functions
- The Adaptive Lasso and Its Oracle Properties
- On constrained and regularized high-dimensional regression
- The \(L_1\) penalized LAD estimator for high dimensional linear regression
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Statistics for high-dimensional data. Methods, theory and applications.
- One-step sparse estimates in nonconcave penalized likelihood models
- On affine scaling algorithms for nonconvex quadratic programming
- On the complexity of approximating a KKT point of quadratic programming
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Optimal nonlinear approximation
- On the conditions used to prove oracle results for the Lasso
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- Error bounds for approximations with deep ReLU networks
- Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- Simultaneous analysis of Lasso and Dantzig selector
- Ranking and empirical minimization of \(U\)-statistics
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Calibrating nonconvex penalized regression in ultra-high dimension
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Cubic regularization of Newton method and its global performance
- Strong oracle optimality of folded concave penalized estimation
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Deep vs. shallow networks: An approximation theory perspective
- Lower Bound Theory of Nonzero Entries in Solutions of \(\ell_2\)-\(\ell_p\) Minimization
- Generalization Error in Deep Learning
- Modern statistical estimation via oracle inequalities
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Regression Shrinkage and Selection via The Lasso: A Retrospective
- Gap Safe screening rules for sparsity enforcing penalties
- A Statistical View of Some Chemometrics Regression Tools
- Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over $\ell_q$-Balls
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- Variable Selection for Support Vector Machines in Moderately High Dimensions
- Convexity, Classification, and Risk Bounds
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems