Graphical Convergence of Subgradients in Nonconvex Optimization and Learning
Publication: 5076697
DOI: 10.1287/moor.2021.1126
zbMath: 1492.90101
arXiv: 1810.07590
OpenAlex: W3178352024
MaRDI QID: Q5076697
Damek Davis, Dmitriy Drusvyatskiy
Publication date: 17 May 2022
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1810.07590
Keywords: stability; subdifferential; Moreau envelope; sample average approximation; graphical convergence; weak convexity; population risk
MSC classification: Computational learning theory (68Q32); Nonconvex programming, global optimization (90C26); Numerical optimization and variational techniques (65K10); Stochastic programming (90C15)
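For orientation, the standard objects behind the keywords above (notation is illustrative, not taken from the record): the population risk and its sample average approximation (SAA) are
\[ f(x) = \mathbb{E}_{\xi \sim P}\big[\ell(x,\xi)\big], \qquad f_n(x) = \frac{1}{n}\sum_{i=1}^{n} \ell(x,\xi_i), \]
and the Moreau envelope of \(f\) with parameter \(\lambda > 0\) is
\[ f_{\lambda}(x) = \inf_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\lVert x - y\rVert^{2} \Big\}. \]
As the title indicates, the paper concerns graphical convergence of the subgradients, i.e. set convergence of the subdifferential graphs \(\operatorname{gph}\partial f_n \to \operatorname{gph}\partial f\).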
Related Items (1)
Cites Work
- Unnamed Item (×10: unresolved citations in the record)
- Uniform laws of large numbers for set-valued mappings and subdifferentials of random functions
- Uniform exponential convergence of sample average random functions under general sampling with applications in stochastic programming
- Smooth sample average approximation of stationary points in nonsmooth stochastic optimization and applications
- Subgradient methods for sharp weakly convex functions
- On the asymptotics of constrained local \(M\)-estimators
- On the asymptotics of constrained \(M\)-estimation
- Probabilistic bounds (via large deviations) for the solutions of stochastic programming problems
- The landscape of empirical risk for nonconvex losses
- Stochastic subgradient method converges on tame functions
- Efficiency of minimizing compositions of convex functions and smooth maps
- Asymptotic and finite-sample properties of estimators based on stochastic gradients
- Local Rademacher complexities
- On the Rate of Convergence of Optimal Solutions of Monte Carlo Approximations of Stochastic Programs
- Concentration Inequalities
- A Vector-Contraction Inequality for Rademacher Complexities
- Convergence of Stationary Points of Sample Average Two-Stage Stochastic Programs: A Generalized Equation Approach
- Stability of \(\varepsilon\)-approximate solutions to convex stochastic programs
- Robust Stochastic Approximation Approach to Stochastic Programming
- Optimization and nonsmooth analysis
- Variational Analysis
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Variational Analysis and Applications
- Stochastic Model-Based Minimization of Weakly Convex Functions
- High-Dimensional Probability
- Variational Analysis and Generalized Differentiation I
- Asymptotic Theory for Solutions in Statistical Estimation and Stochastic Programming
- Analysis of Sample-Path Optimization
- doi:10.1162/153244302760200704
- doi:10.1162/153244303321897690
- Prox-regular functions in variational analysis
- Solving (most) of a set of quadratic equalities: composite optimization for robust phase retrieval
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Understanding Machine Learning
- Proximité et dualité dans un espace hilbertien
- Quantitative Stability in Stochastic Programming: The Method of Probability Metrics
- Stability results in learning theory
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization