A variational approach to stochastic minimization of convex functionals
From MaRDI portal
Publication:5146229
zbMATH Open: 1474.90326 · arXiv: 1605.03289 · MaRDI QID: Q5146229
Authors: Miroslav Bačák
Publication date: 25 January 2021
Abstract: Stochastic methods for minimizing a convex integral functional, as initiated by Robbins and Monro in the early 1950s, rely on evaluating a gradient (or a subgradient if the function is not smooth) and moving in the corresponding direction. In contrast, we use a variational technique resulting in an implicit stochastic minimization method, which has recently appeared in several diverse contexts. Such an approach is desirable whenever the underlying space does not have a differentiable structure; moreover, it exhibits better stability properties, which make it preferable even in linear spaces. Our results are formulated in locally compact Hadamard spaces, but they are new even in Euclidean space, the main novelty being more general growth conditions on the functional. We verify that the assumptions of our convergence theorem are satisfied in a few classical minimization problems.
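The abstract contrasts explicit (sub)gradient steps with an implicit scheme: each iteration minimizes the sampled loss plus a quadratic penalty on the distance to the current iterate (a proximal step). As an illustration outside the paper, here is a minimal Python sketch for a stochastic least-squares objective, where that proximal step has a closed form. The function names, the data model, and the step-size schedule are all illustrative assumptions, not the paper's actual setting (which is Hadamard spaces with general growth conditions).

```python
import random

def prox_step(x, a, b, lam):
    """Implicit update for the sampled loss f(y) = 0.5 * (a . y - b)**2:
    x_new = argmin_y f(y) + ||y - x||^2 / (2 * lam).
    Setting the gradient to zero yields the closed form below."""
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    coef = lam * (dot - b) / (1.0 + lam * norm2)
    return [xi - coef * ai for ai, xi in zip(a, x)]

def stochastic_proximal_point(samples, x0, lam0=1.0, iters=2000, seed=0):
    """Implicit stochastic minimization: at each step, draw one sample
    (a, b) and take a proximal step with a diminishing step size
    (a Robbins-Monro-type schedule, chosen here for illustration)."""
    rng = random.Random(seed)
    x = list(x0)
    for k in range(1, iters + 1):
        a, b = rng.choice(samples)
        x = prox_step(x, a, b, lam0 / k ** 0.5)
    return x
```

Note that the implicit step is stable for any step size `lam`: the denominator `1 + lam * norm2` caps the update, whereas an explicit stochastic gradient step with the same `lam` can overshoot. This is one concrete sense in which the implicit method "exhibits better stability properties".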
Full work available at URL: https://arxiv.org/abs/1605.03289
Recommendations
- Minimization of stochastic functionals and random variational inequalities
- Some methods of stochastic programming in Hilbert space
- scientific article; zbMATH DE number 3862933
- scientific article; zbMATH DE number 3887445
- Projected stochastic gradients for convex constrained problems in Hilbert spaces
Keywords: convex optimization; stochastic approximation; iterative method; proximal point algorithm; variational analysis
MSC: Convex programming (90C25); Stochastic programming (90C15); Set-valued and variational analysis (49J53)
Cited In (9)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Title not available
- Descent methods with computational errors in Banach spaces
- Old and new challenges in Hadamard spaces
- Analytic approach to variance optimization under an \(\ell_1\) constraint
- Stability of the asymptotic behavior for continuous descent methods with a convex objective function
- Random gradient-free minimization of convex functions
- Vaidya's method for convex stochastic optimization problems in small dimension
- Two iterative processes generated by regular vector fields in Banach spaces