On lower iteration complexity bounds for the convex concave saddle point problems
From MaRDI portal
Publication:2149573
Abstract: In this paper, we study the lower iteration complexity bounds for finding the saddle point of a strongly convex and strongly concave saddle point problem: \(\min_x \max_y F(x,y)\). We restrict the classes of algorithms in our investigation to be either pure first-order methods or methods using proximal mappings. The existing lower bound result for this type of problem is obtained via the framework of strongly monotone variational inequality problems, which corresponds to the case where the gradient Lipschitz constants (\(L_x\), \(L_y\) and \(L_{xy}\)) and the strong convexity/concavity constants (\(\mu_x\) and \(\mu_y\)) are uniform with respect to the variables \(x\) and \(y\). However, specific to the min-max saddle point problem these parameters are naturally different. Therefore, one is led to finding the best possible lower iteration complexity bounds specific to min-max saddle point models. In this paper we present the following results. For the class of pure first-order algorithms, our lower iteration complexity bound is \(\Omega\bigl(\sqrt{\frac{L_x}{\mu_x}+\frac{L_{xy}^2}{\mu_x\mu_y}+\frac{L_y}{\mu_y}}\cdot\ln\frac{1}{\epsilon}\bigr)\), where the term \(\frac{L_{xy}^2}{\mu_x\mu_y}\) explains how the coupling influences the iteration complexity. Under several special parameter regimes, this lower bound has been achieved by corresponding optimal algorithms; however, whether or not the bound under the general parameter regime is optimal remains open. Additionally, for the special case of problems with bilinear coupling, given the availability of certain proximal operators, a lower bound of \(\Omega\bigl(\sqrt{\frac{L_{xy}^2}{\mu_x\mu_y}+1}\cdot\ln\frac{1}{\epsilon}\bigr)\) is established in this paper, and optimal algorithms have already been developed in the literature.
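As a brief illustration of how the first-order bound reads (a sketch in the notation above; the precise assumptions on the problem class are those stated in the paper): for \(F\) that is \(\mu_x\)-strongly convex in \(x\) and \(\mu_y\)-strongly concave in \(y\), with blockwise gradient Lipschitz constants \(L_x\), \(L_y\) and coupling constant \(L_{xy}\),
\[
  \Omega\!\left(\sqrt{\frac{L_x}{\mu_x}+\frac{L_{xy}^2}{\mu_x\mu_y}+\frac{L_y}{\mu_y}}\;\cdot\;\ln\frac{1}{\epsilon}\right)
\]
iterations are necessary. When the coupling vanishes (\(L_{xy}=0\)), this reduces to the familiar \(\sqrt{L_x/\mu_x}\,\ln(1/\epsilon)\) and \(\sqrt{L_y/\mu_y}\,\ln(1/\epsilon)\) complexities of the two decoupled strongly convex (resp. strongly concave) subproblems.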
Recommendations
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- A primal-dual algorithm with line search for general convex-concave saddle point problems
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- A simple algorithm for a class of nonsmooth convex-concave saddle-point problems
Cites work
- scientific article; zbMATH DE number 5145289
- scientific article; zbMATH DE number 3790208
- scientific article; zbMATH DE number 3534286
- scientific article; zbMATH DE number 1266748
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A globally convergent Newton method for solving strongly monotone variational inequalities
- A note on a globally convergent Newton method for solving monotone variational inequalities
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- Algorithmic Game Theory
- An accelerated linearized alternating direction method of multipliers
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- DSCOVR: randomized primal-dual block coordinate algorithms for asynchronous distributed optimization
- Dual extrapolation and its applications to solving variational inequalities and related problems
- First-order algorithms for convex optimization with nonseparable objective and coupled constraints
- Information-based complexity of linear operator equations
- Lectures on convex optimization
- Lower bounds for finding stationary points I
- Lower bounds for finding stationary points II: first-order methods
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Oracle complexity of second-order methods for smooth convex optimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Solving strongly monotone variational and quasi-variational inequalities
- Solving variational inequalities with stochastic mirror-prox algorithm
Cited in (15)
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization
- Robust Accelerated Primal-Dual Methods for Computing Saddle Points
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- Alleviating limit cycling in training GANs with an optimization technique
- Smooth monotone stochastic variational inequalities and saddle point problems: a survey
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Efficient first order method for saddle point problems with higher order smoothness
- Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
- Transformed primal-dual methods for nonlinear saddle point systems
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Note on time bounds of two-phase algorithms for \(L\)-convex function minimization
- Accelerated minimax algorithms flock together
- Perseus: a simple and optimal high-order method for variational inequalities
- scientific article; zbMATH DE number 6378119
- New first-order algorithms for stochastic variational inequalities