Last-iterate convergence: zero-sum games and constrained min-max optimization
Publication: Q5090401
DOI: 10.4230/LIPICS.ITCS.2019.27
MaRDI QID: Q5090401
Authors: Constantinos Daskalakis, Ioannis Panageas
Publication date: 18 July 2022
Full work available at URL: https://arxiv.org/abs/1807.04252
Recommendations
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Last-iterate convergence of saddle-point optimizers via high-resolution differential equations
- Fast convergence of optimistic gradient ascent in network zero-sum extensive form games
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
Cites Work
- Prediction, Learning, and Games
- Smooth minimization of non-smooth functions
- Title not available
- An analog of the minimax theorem for vector payoffs
- An iterative method of solving a game
- Discrete Dynamical Systems
- Title not available
- The equivalence of linear programs and zero-sum games
- Mutation, Sexual Reproduction and Survival in Dynamic Environments
Cited in (4 documents)
- Alleviating limit cycling in training GANs with an optimization technique
- Last-iterate convergence of saddle-point optimizers via high-resolution differential equations
- Efficient second-order optimization with predictions in differential games
- A unified stochastic approximation framework for learning in games