First-order algorithm with O(ln(1/ε)) convergence for ε-equilibrium in two-person zero-sum games
DOI: 10.1007/s10107-010-0430-2 · zbMATH Open: 1243.91004 · OpenAlex: W2072651894 · MaRDI QID: Q431003 · FDO: Q431003
Authors: Andrew Gilpin, Javier Peña, Tuomas Sandholm
Publication date: 26 June 2012
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-010-0430-2
Recommendations
- Efficient search of first-order Nash equilibria in nonconvex-concave smooth min-max problems
- Smoothing techniques for computing Nash equilibria of sequential games
- Faster algorithms for extensive-form game solving via improved smoothing functions
- Near-optimal no-regret algorithms for zero-sum games
Document type: scientific article; zbMATH DE number 18885
MSC classification:
- 90C47 Minimax problems in mathematical programming
- 91A05 2-person games
- 90C33 Complementarity and equilibrium problems and variational inequalities (finite dimensions) (aspects of mathematical programming)
- 90C52 Methods of reduced gradient type
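For context, the ε-equilibrium in the title is a pair of mixed strategies whose saddle-point (duality) gap is at most ε; the paper's contribution is a first-order method that drives this gap below ε in O(ln(1/ε)) iterations. A minimal sketch of the gap criterion itself (the payoff matrix and function names below are illustrative, not taken from the paper):

```python
import numpy as np

# Payoff matrix of a two-person zero-sum game: the row player picks a
# mixed strategy x to maximize x^T A y, the column player picks y to
# minimize it.  Matching pennies is used here purely as an example.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def equilibrium_gap(A, x, y):
    """Saddle-point gap of the strategy pair (x, y).

    gap(x, y) = max_{x'} x'^T A y - min_{y'} x^T A y'
              = max_i (A y)_i    - min_j (x^T A)_j  >= 0,
    and (x, y) is an epsilon-equilibrium iff gap(x, y) <= epsilon.
    """
    return np.max(A @ y) - np.min(x @ A)

uniform = np.array([0.5, 0.5])
pure = np.array([1.0, 0.0])
print(equilibrium_gap(A, uniform, uniform))  # 0.0: exact equilibrium
print(equilibrium_gap(A, pure, pure))        # 2.0: far from equilibrium
```

The gap is exactly the quantity a first-order smoothing method of the kind studied in the paper drives to zero.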
Cites Work
- Smooth minimization of non-smooth functions
- Excessive Gap Technique in Nonsmooth Convex Minimization
- On convergence rates of subgradient optimization methods
- A course in game theory
- An O(√nL)-Iteration Homogeneous and Self-Dual Linear Programming Algorithm
- The complexity of two-person zero-sum games in extensive form
- Potential function methods for approximately solving linear programming problems: theory and practice
- Efficient computation of behavior strategies
- Applying metric regularity to compute a condition measure of a smoothing algorithm for matrix games
- Smoothing techniques for computing Nash equilibria of sequential games
Cited In (19)
- Subgradient methods for huge-scale optimization problems
- Efficient search of first-order Nash equilibria in nonconvex-concave smooth min-max problems
- On incremental approximate saddle-point computation in zero-sum matrix games
- Forward-partial inverse-forward splitting for solving monotone inclusions
- Faster first-order primal-dual methods for linear programming using restarts and sharpness
- Sharpness, restart, and acceleration
- A fast dual proximal-gradient method for separable convex optimization with linear coupled constraints
- Algorithm for computing approximate Nash equilibrium in continuous games with application to continuous blotto
- Infeasibility Detection with Primal-Dual Hybrid Gradient for Large-Scale Linear Programming
- A deterministic rescaled perceptron algorithm
- Towards a deeper geometric, analytic and algorithmic understanding of margins
- Applying metric regularity to compute a condition measure of a smoothing algorithm for matrix games
- Smoothing techniques for computing Nash equilibria of sequential games
- A simple nearly optimal restart scheme for speeding up first-order methods
- Faster algorithms for extensive-form game solving via improved smoothing functions
- "Efficient" subgradient methods for general convex optimization
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Faster subgradient methods for functions with Hölderian growth
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure