Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
From MaRDI portal
Publication: 5085148
DOI: 10.1287/moor.2021.1175
zbMath: 1489.90130
arXiv: 1903.01687
OpenAlex: W3164209919
MaRDI QID: Q5085148
Publication date: 27 June 2022
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1903.01687
Keywords: stochastic approximation; convex-concave saddle-point problems; primal-dual first-order method; primal-dual hybrid gradient framework
MSC classification: Convex programming (90C25); Minimax problems in mathematical programming (90C47); Stochastic programming (90C15)
Related Items
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- New Primal-Dual Algorithms for a Class of Nonsmooth and Nonlinear Convex-Concave Minimax Problems
- A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems
- Accelerated variance-reduced methods for saddle-point problems
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points
- Randomized Lagrangian stochastic approximation for large-scale constrained stochastic Nash games
Uses Software
Cites Work
- Primal-dual subgradient methods for convex problems
- Smooth minimization of non-smooth functions
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Gradient methods for minimizing composite functions
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- An optimal method for stochastic composite optimization
- On general minimax theorems
- Smoothing technique and its applications in semidefinite optimization
- Subgradient methods for saddle-point problems
- Accelerated schemes for a class of variational inequalities
- Non-Euclidean restricted memory level method for large-scale convex optimization
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
- The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography
- CVXPY: A Python-Embedded Modeling Language for Convex Optimization
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Convex Optimization in Normed Spaces
- Convergence Rate Analysis of Primal-Dual Splitting Schemes
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Robust Stochastic Approximation Approach to Stochastic Programming
- An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex–concave saddle-point problems
- High-Dimensional Probability
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Information-Based Complexity, Feedback and Dynamics in Convex Programming
- Regularization and Variable Selection Via the Elastic Net
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Elements of Information Theory
- A Stochastic Approximation Method