Sample average approximation with sparsity-inducing penalty for high-dimensional stochastic programming
From MaRDI portal
Publication: 2330643
Recommendations
- Regularized sample average approximation for high-dimensional stochastic optimization under low-rankness
- Robust Stochastic Approximation Approach to Stochastic Programming
- Sample average approximation of expected value constrained stochastic programs
- On feasibility of sample average approximation solutions
- Sample average approximations of strongly convex stochastic programs in Hilbert spaces
Cites works
- A general theory of concave regularization for high-dimensional sparse estimation problems
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems
- Calibrating nonconvex penalized regression in ultra-high dimension
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Complexity of unconstrained \(L_2 - L_p\) minimization
- Convexity, Classification, and Risk Bounds
- Cubic regularization of Newton method and its global performance
- Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
- Global solutions to folded concave penalized nonconvex learning
- Lectures on Stochastic Programming
- Nearly unbiased variable selection under minimax concave penalty
- Nonconcave Penalized Likelihood With NP-Dimensionality
- On affine scaling algorithms for nonconvex quadratic programming
- On the complexity of approximating a KKT point of quadratic programming
- Optimal computational and statistical rates of convergence for sparse nonconvex learning problems
- Regularized \(M\)-estimators with nonconvexity: statistical and algorithmic theory for local optima
- Simultaneous analysis of Lasso and Dantzig selector
- Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation
- Strong oracle optimality of folded concave penalized estimation
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- The sample average approximation method for stochastic discrete optimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited in (8)
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
- Regularized sample average approximation for high-dimensional stochastic optimization under low-rankness
- Sample complexity of sample average approximation for conditional stochastic optimization
- SPAR: Stochastic Programming with Adversarial Recourse
- Diametrical risk minimization: theory and computations
- Bilevel cutting-plane algorithm for cardinality-constrained mean-CVaR portfolio optimization
- Consistency bounds and support recovery of d-stationary solutions of sparse sample average approximations
- General feasibility bounds for sample average approximation via Vapnik-Chervonenkis dimension