A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
DOI: 10.1007/s10957-020-01799-3 · zbMath: 1467.90029 · OpenAlex: W3118587776 · MaRDI QID: Q2031928
Publication date: 15 June 2021
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/s10957-020-01799-3
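For orientation only, the sketch below illustrates the kind of update the title refers to; it is not the authors' scheme from the paper. It performs one stochastic Bregman proximal gradient step with the negative-entropy kernel on the probability simplex, where the Bregman divergence is the KL divergence and the step reduces to the classical entropic mirror update. The least-squares objective, mini-batch size, and O(1/sqrt(k)) step size are all illustrative assumptions.

```python
import numpy as np

def bregman_prox_grad_step(x, grad, step):
    """One Bregman proximal gradient step on the simplex.

    With the negative-entropy kernel h(x) = sum_i x_i log x_i, the
    Bregman divergence D_h is the KL divergence, and the minimizer of
    <grad, z> + (1/step) * D_h(z, x) over the simplex is the entropic
    mirror step: x_+ proportional to x * exp(-step * grad).
    """
    z = x * np.exp(-step * grad)
    return z / z.sum()

def stochastic_grad(x, A, b, rng, batch=8):
    """Mini-batch stochastic gradient of f(x) = (1/2n) ||A x - b||^2."""
    idx = rng.integers(0, A.shape[0], size=batch)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
x_true = np.abs(rng.standard_normal(d))
x_true /= x_true.sum()                 # ground truth on the simplex
b = A @ x_true

x = np.full(d, 1.0 / d)                # start at the barycenter
for k in range(2000):
    g = stochastic_grad(x, A, b, rng)
    x = bregman_prox_grad_step(x, g, step=0.5 / np.sqrt(k + 1))

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

The extragradient variant analyzed alongside this scheme would, roughly, take an extrapolation step of the same form first and then re-evaluate the stochastic gradient at the extrapolated point before the final update; see the paper for the precise algorithms and step-size conditions.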
Related Items (4)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities
- Bregman proximal point type algorithms for quasiconvex minimization
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
Uses Software
Cites Work
- Handbook of simulation optimization
- An extragradient-based alternating direction method for convex minimization
- A simplified view of first order methods for optimization
- Extragradient method in optimization: convergence and complexity
- On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Some recent advances in projection-type methods for variational inequalities
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
- Generalized uniformly optimal methods for nonlinear programming
- Feature Article: Optimization for simulation: Theory vs. Practice
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- Introduction to Stochastic Programming
- First Order Methods Beyond Convexity and Lipschitz Gradient Continuity with Applications to Quadratic Inverse Problems
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- Optimization Methods for Large-Scale Machine Learning
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- On perturbed proximal gradient algorithms
- Understanding Machine Learning
- Convex Analysis
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- A Stochastic Approximation Method
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization