A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems
From MaRDI portal
Publication:6097769
Abstract: In this paper, we propose a variance-reduced primal-dual algorithm with Bregman distances for solving convex-concave saddle-point problems with a finite-sum structure and a nonbilinear coupling function. Problems of this type typically arise in machine learning and game theory. Under standard assumptions, the algorithm is proved to converge with explicit oracle-complexity bounds for both constant and non-constant parameter choices, where the bounds depend on n, the number of function components. Compared with existing methods, our framework yields a significant reduction in the number of primal-dual gradient samples required to achieve ε-accuracy in the primal-dual gap. We tested our method on a distributionally robust optimization problem to demonstrate its effectiveness.
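The general idea behind variance-reduced primal-dual methods can be illustrated with a minimal sketch. The following is not the paper's algorithm; it is an SVRG-style gradient descent-ascent on a toy finite-sum saddle-point problem of my own construction (the component functions, step size, and epoch counts are all illustrative assumptions), showing the snapshot/full-gradient mechanism that drives down the variance of the stochastic primal-dual gradient estimates.

```python
import numpy as np

# Hedged toy example, NOT the paper's method:
# Phi(x, y) = (1/n) * sum_i [ a_i*x*y + b_i*x ] + x^2/2 - y^2/2,
# a strongly-convex-strongly-concave finite-sum saddle-point problem.
rng = np.random.default_rng(0)
n = 50
a = rng.normal(size=n)
b = rng.normal(size=n)

def grad_i(x, y, i):
    """Gradients of the i-th component w.r.t. x and y."""
    return a[i] * y + x + b[i], a[i] * x - y

def full_grad(x, y):
    """Exact full gradient, recomputed once per epoch at the snapshot."""
    return a.mean() * y + x + b.mean(), a.mean() * x - y

x = y = 0.0
eta = 0.05  # illustrative step size
for epoch in range(80):
    xs, ys = x, y                # snapshot point
    Fx, Fy = full_grad(xs, ys)   # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        gx, gy = grad_i(x, y, i)
        sx, sy = grad_i(xs, ys, i)
        # SVRG variance-reduced estimate: unbiased, and its variance
        # vanishes as (x, y) and the snapshot approach the saddle point.
        vx, vy = gx - sx + Fx, gy - sy + Fy
        x -= eta * vx            # primal descent step
        y += eta * vy            # dual ascent step

# Closed-form saddle point of the averaged objective, for comparison.
x_star = -b.mean() / (1.0 + a.mean() ** 2)
y_star = a.mean() * x_star
print(abs(x - x_star), abs(y - y_star))
```

Because the variance of the estimate shrinks as the iterates approach the saddle point, the method converges to the exact solution with a constant step size, unlike plain stochastic gradient descent-ascent, which needs a decaying step size to suppress the gradient noise.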
Recommendations
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- Optimal primal-dual methods for a class of saddle point problems
- A primal-dual algorithm framework for convex saddle-point optimization
- A primal-dual prediction-correction algorithm for saddle point optimization
Cites work
- scientific article; zbMATH DE number 2107836
- A primal-dual algorithm with line search for general convex-concave saddle point problems
- Accelerated schemes for a class of variational inequalities
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems
- An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems
- Clustering with Bregman divergences
- Learning the kernel matrix with semidefinite programming
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Proximal extrapolated gradient methods for variational inequalities
- Robust Stochastic Approximation Approach to Stochastic Programming
- Smooth minimization of non-smooth functions
- Solving variational inequalities with stochastic mirror-prox algorithm
Cited in (7)
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Cyclic Coordinate Dual Averaging with Extrapolation
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Accelerated variance-reduced methods for saddle-point problems
- Non-ergodic convergence rate of an inertial accelerated primal-dual algorithm for saddle point problems
- An inexact primal-dual algorithm for semi-infinite programming
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems