A stochastic variance-reduced accelerated primal-dual method for finite-sum saddle-point problems
From MaRDI portal
Publication:6097769
DOI: 10.1007/S10589-023-00472-5
zbMATH Open: 1519.90258
arXiv: 2012.13456
OpenAlex: W3117305116
MaRDI QID: Q6097769
Authors: Erfan Yazdandoost Hamedani, Afrooz Jalilzadeh
Publication date: 7 June 2023
Published in: Computational Optimization and Applications
Abstract: In this paper, we propose a variance-reduced primal-dual algorithm with Bregman distance for solving convex-concave saddle-point problems with finite-sum structure and a nonbilinear coupling function. This type of problem typically arises in machine learning and game theory. Under standard assumptions, the algorithm is proved to converge, with oracle complexity bounds established for both constant and non-constant parameter choices, where n is the number of function components. Compared with existing methods, our framework yields a significant improvement in the number of primal-dual gradient samples required to achieve ε-accuracy of the primal-dual gap. We tested our method on a distributionally robust optimization problem to demonstrate its effectiveness.
Full work available at URL: https://arxiv.org/abs/2012.13456
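To make the abstract's "variance-reduced primal-dual" idea concrete, the following is a minimal, illustrative sketch of an SVRG-style variance-reduced gradient-descent-ascent loop on a toy finite-sum saddle-point problem with bilinear coupling and Euclidean proximal steps. This is an assumption-laden simplification for intuition only: the paper treats nonbilinear coupling functions and general Bregman distances, and the step sizes, problem data, and estimator layout here are not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
A_list = [rng.standard_normal((d, d)) for _ in range(n)]  # finite-sum coupling data
A = sum(A_list) / n
b = rng.standard_normal(d)
mu = 1.0  # strong convexity/concavity of the quadratic regularizers

# Toy objective: min_x max_y (mu/2)||x||^2 + (1/n) sum_i y^T (A_i x - b) - (mu/2)||y||^2
# Closed-form saddle point, used only to check convergence:
x_star = np.linalg.solve(mu**2 * np.eye(d) + A.T @ A, A.T @ b)
y_star = (A @ x_star - b) / mu

def coupling_grads(Ai, x, y):
    """Partial gradients of the component coupling phi_i(x, y) = y^T (A_i x - b)."""
    return Ai.T @ y, Ai @ x - b

x, y = np.zeros(d), np.zeros(d)
eta = 0.05  # step size (chosen small for stability; not tuned to any theory)
for epoch in range(200):
    # SVRG anchor: full coupling gradients at a snapshot point
    xs, ys = x.copy(), y.copy()
    gx_full = A.T @ ys
    gy_full = A @ xs - b
    for _ in range(n):
        i = rng.integers(n)
        gx_i, gy_i = coupling_grads(A_list[i], x, y)
        gx_s, gy_s = coupling_grads(A_list[i], xs, ys)
        # Variance-reduced estimator: sampled gradient, re-centered at the snapshot
        vx = gx_i - gx_s + gx_full
        vy = gy_i - gy_s + gy_full
        # Euclidean proximal steps for the quadratic regularizers:
        x = (x - eta * vx) / (1 + eta * mu)  # descend in the primal variable
        y = (y + eta * vy) / (1 + eta * mu)  # ascend in the dual variable
```

The key mechanism is the estimator `vx, vy`: it is unbiased, and its variance shrinks as the iterates approach the snapshot, which is what lets variance-reduced methods use fewer gradient samples than plain stochastic primal-dual schemes.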
Recommendations
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- Optimal primal-dual methods for a class of saddle point problems
- A primal-dual algorithm framework for convex saddle-point optimization
- A primal-dual prediction-correction algorithm for saddle point optimization
Cites Work
- Smooth minimization of non-smooth functions
- Clustering with Bregman divergences
- Robust Stochastic Approximation Approach to Stochastic Programming
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Learning the kernel matrix with semidefinite programming
- Solving variational inequalities with stochastic mirror-prox algorithm
- Mirror Prox algorithm for multi-term composite minimization and semi-separable problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- An accelerated HPE-type algorithm for a class of composite convex-concave saddle-point problems
- An accelerated non-Euclidean hybrid proximal extragradient-type algorithm for convex-concave saddle-point problems
- Proximal extrapolated gradient methods for variational inequalities
- Accelerated schemes for a class of variational inequalities
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- A primal-dual algorithm with line search for general convex-concave saddle point problems
Cited In (6)
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Cyclic Coordinate Dual Averaging with Extrapolation
- An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
- Accelerated variance-reduced methods for saddle-point problems
- Non-ergodic convergence rate of an inertial accelerated primal-dual algorithm for saddle point problems
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems