On lower iteration complexity bounds for the convex concave saddle point problems
From MaRDI portal
DOI: 10.1007/S10107-021-01660-Z · zbMATH Open: 1494.90127 · arXiv: 1912.07481 · OpenAlex: W3166764897 · MaRDI QID: Q2149573
Shuzhong Zhang, Mingyi Hong, Junyu Zhang
Publication date: 29 June 2022
Published in: Mathematical Programming. Series A. Series B
Abstract: In this paper, we study the lower iteration complexity bounds for finding the saddle point of a strongly convex and strongly concave saddle point problem: $\min_x \max_y F(x,y)$. We restrict the classes of algorithms in our investigation to be either pure first-order methods or methods using proximal mappings. The existing lower bound result for this type of problem is obtained via the framework of strongly monotone variational inequality problems, which corresponds to the case where the gradient Lipschitz constants ($L_x$, $L_y$ and $L_{xy}$) and strong convexity/concavity constants ($\mu_x$ and $\mu_y$) are uniform with respect to the variables $x$ and $y$. However, specific to the min-max saddle point problem these parameters are naturally different. Therefore, one is led to finding the best possible lower iteration complexity bounds, specific to the min-max saddle point models. In this paper we present the following results. For the class of pure first-order algorithms, our lower iteration complexity bound is $\Omega\left(\sqrt{\frac{L_x}{\mu_x}+\frac{L_{xy}^2}{\mu_x\mu_y}+\frac{L_y}{\mu_y}}\cdot\ln\frac{1}{\epsilon}\right)$, where the term $\frac{L_{xy}^2}{\mu_x\mu_y}$ explains how the coupling influences the iteration complexity. Under several special parameter regimes, this lower bound has been achieved by corresponding optimal algorithms. However, whether or not the bound under the general parameter regime is optimal remains open. Additionally, for the special case of bilinear coupling problems, given the availability of certain proximal operators, a lower bound of $\Omega\left(\sqrt{\frac{L_{xy}^2}{\mu_x\mu_y}+1}\cdot\ln\frac{1}{\epsilon}\right)$ is established in this paper, and optimal algorithms have already been developed in the literature.
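The coupling term in the lower bound reflects how the interaction between the two blocks slows down first-order methods. As a minimal illustration (not the paper's lower-bound construction), simultaneous gradient descent-ascent on a toy strongly-convex-strongly-concave quadratic converges linearly to the unique saddle point; all parameter values below are arbitrary choices for the sketch.

```python
# Simultaneous gradient descent-ascent (GDA) on a toy
# strongly-convex-strongly-concave saddle point problem:
#     F(x, y) = (mu_x/2) x^2 + L_xy * x * y - (mu_y/2) y^2
# The unique saddle point is (0, 0).  Parameter values are
# illustrative only; they are not taken from the paper.
import math

mu_x, mu_y, L_xy = 1.0, 1.0, 1.0   # strong convexity/concavity and coupling
eta = 0.1                          # step size, small enough for convergence

x, y = 1.0, 1.0                    # arbitrary starting point
for _ in range(150):
    gx = mu_x * x + L_xy * y       # partial derivative of F w.r.t. x
    gy = L_xy * x - mu_y * y       # partial derivative of F w.r.t. y
    x, y = x - eta * gx, y + eta * gy   # descend in x, ascend in y

dist = math.hypot(x, y)            # distance to the saddle point (0, 0)
print(f"distance to saddle point after 150 steps: {dist:.2e}")
```

With these parameters the iteration matrix has spectral radius below one, so the distance to the saddle point shrinks geometrically, matching the linear-rate (logarithmic in $1/\epsilon$) behavior discussed in the abstract.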
Full work available at URL: https://arxiv.org/abs/1912.07481
Cites Work
- DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization
- Algorithmic Game Theory
- Title not available
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Title not available
- Solving strongly monotone variational and quasi-variational inequalities
- Title not available
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Title not available
- On the ergodic convergence rates of a first-order primal-dual algorithm
- A globally convergent Newton method for solving strongly monotone variational inequalities
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- Information-based complexity of linear operator equations
- Oracle complexity of second-order methods for smooth convex optimization
- An Accelerated Linearized Alternating Direction Method of Multipliers
- A note on a globally convergent Newton method for solving monotone variational inequalities
- Lectures on convex optimization
- First-order algorithms for convex optimization with nonseparable objective and coupled constraints
- Lower bounds for finding stationary points I
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Convergence Rate of $\mathcal{O}(1/k)$ for Optimistic Gradient and Extragradient Methods in Smooth Convex-Concave Saddle Point Problems
- Lower bounds for finding stationary points II: first-order methods
Cited In (13)
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- Alleviating limit cycling in training GANs with an optimization technique
- Smooth monotone stochastic variational inequalities and saddle point problems: a survey
- New First-Order Algorithms for Stochastic Variational Inequalities
- Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
- Transformed primal-dual methods for nonlinear saddle point systems
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Note on time bounds of two-phase algorithms for \(L\)-convex function minimization
- Accelerated minimax algorithms flock together
- Perseus: a simple and optimal high-order method for variational inequalities
- Title not available
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points