First-order convergence theory for weakly-convex-weakly-concave min-max problems
From MaRDI portal
Publication:5159451
Recommendations
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
- Two steps at a time – taking GAN training in stride with Tseng's method
- The landscape of the proximal point method for nonconvex-nonconcave minimax optimization
Cites work
- scientific article; zbMATH DE number 3534286 (no title available)
- scientific article; zbMATH DE number 2159409 (no title available)
- A New Projection Method for Variational Inequality Problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- An efficient primal dual prox method for non-smooth optimization
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- Finite-dimensional variational inequality and nonlinear complementarity problems: A survey of theory, algorithms and applications
- Monotone (nonlinear) operators in Hilbert space
- Monotone Operators and the Proximal Point Algorithm
- Most tensor problems are NP-hard
- On some non-linear elliptic differential functional equations
- On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Robust Stochastic Approximation Approach to Stochastic Programming
- Robust linear least squares regression
- Saddle-point dynamics: conditions for asymptotic stability of saddle points
- Solving strongly monotone variational and quasi-variational inequalities
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- The proximal point method for nonmonotone variational inequalities
- Unified framework of extragradient-type methods for pseudomonotone variational inequalities
Cited in (6)
- First-order Convergence Theory for Weakly-Convex-Weakly-Concave Min-max Problems
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- Decentralized Gradient Descent Maximization Method for Composite Nonconvex Strongly-Concave Minimax Problems
- A quasi-Newton subspace trust region algorithm for nonmonotone variational inequalities in adversarial learning over box constraints
- Perseus: a simple and optimal high-order method for variational inequalities