Publication:2330643
From MaRDI portal
Latest revision as of 16:28, 2 February 2024
DOI: 10.1007/s10107-018-1278-0
zbMath: 1423.90162
OpenAlex: W2801615849
Wikidata: Q91080626
Scholia: Q91080626
MaRDI QID: Q2330643
Hongcheng Liu, Xue Wang, Tao Yao, Yinyu Ye, Run-Ze Li
Publication date: 22 October 2019
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6824431
stochastic programming; sample average approximation; folded concave penalty; second order necessary condition
Ridge regression; shrinkage estimators (Lasso) (62J07); Monte Carlo methods (65C05); Nonconvex programming, global optimization (90C26); Stochastic programming (90C15)
Related Items
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- General Feasibility Bounds for Sample Average Approximation via Vapnik--Chervonenkis Dimension
- Diametrical risk minimization: theory and computations
- Regularized sample average approximation for high-dimensional stochastic optimization under low-rankness
- Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
- Bilevel cutting-plane algorithm for cardinality-constrained mean-CVaR portfolio optimization
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Global solutions to folded concave penalized nonconvex learning
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems
- On affine scaling algorithms for nonconvex quadratic programming
- On the complexity of approximating a KKT point of quadratic programming
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Simultaneous analysis of Lasso and Dantzig selector
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Calibrating nonconvex penalized regression in ultra-high dimension
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Cubic regularization of Newton method and its global performance
- Strong oracle optimality of folded concave penalized estimation
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- The Sample Average Approximation Method for Stochastic Discrete Optimization
- Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation
- Lectures on Stochastic Programming
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Nonconcave Penalized Likelihood With NP-Dimensionality
- Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
- Convexity, Classification, and Risk Bounds
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- A general theory of concave regularization for high-dimensional sparse estimation problems