Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
Publication:6136233
Abstract: This paper considers a class of nonsmooth nonconvex-nonconcave min-max problems arising in machine learning and games. We first provide sufficient conditions for the existence of global minimax points and local minimax points. Next, we establish first-order and second-order optimality conditions for local minimax points by using directional derivatives. For smooth min-max problems, these conditions reduce to ones stated in terms of Fréchet derivatives. We apply our theoretical results to generative adversarial networks (GANs), in which two neural networks contest with each other in a game. Examples are used to illustrate applications of the new theory to training GANs.
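For orientation, here is a minimal sketch of the setting in standard notation (the symbols below are illustrative and not taken verbatim from the paper). The problems studied have the form

\[
\min_{x \in X} \max_{y \in Y} f(x, y),
\]

where \(f\) may be nonsmooth and is neither assumed convex in \(x\) nor concave in \(y\). A point \((x^*, y^*)\) is a local minimax point (in the sense popularized by Jin, Netrapalli and Jordan) if \(y^*\) is a local maximizer of \(f(x^*, \cdot)\) and there is a function \(h\) with \(h(\delta) \to 0\) as \(\delta \to 0\) such that, for all small \(\delta > 0\) and all \(x\) with \(\|x - x^*\| \le \delta\),

\[
f(x^*, y^*) \le \max_{\|y - y^*\| \le h(\delta)} f(x, y).
\]

The GAN application fits this template with the classical two-player objective

\[
\min_{G} \max_{D} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))],
\]

which becomes nonsmooth and nonconvex-nonconcave once \(D\) and \(G\) are parametrized by neural networks with piecewise linear activations such as ReLU.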
Recommendations
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- The landscape of the proximal point method for nonconvex-nonconcave minimax optimization
- Two steps at a time - taking GAN training in stride with Tseng's method
- Efficient search of first-order Nash equilibria in nonconvex-concave smooth min-max problems
Cites work
- scientific article; zbMATH DE number 46303
- scientific article; zbMATH DE number 7415112
- A Generalized Second-Order Derivative in Nonsmooth Optimization
- A study of piecewise linear-quadratic programs
- Bilevel optimization. Advances and next challenges
- Convergence analysis of sample average approximation of two-stage stochastic generalized equations
- Exact penalization of generalized Nash equilibrium problems
- Generalized Nash equilibrium problems
- Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications
- Lectures on stochastic programming. Modeling and theory.
- On gradient-based learning in continuous games
- On lower iteration complexity bounds for the convex concave saddle point problems
- Optimal primal-dual methods for a class of saddle point problems
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Pure characteristics demand models and distributionally robust mathematical programs with stochastic complementarity constraints
- Variational Analysis
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
Cited in (8)
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- Newton and interior-point methods for (constrained) nonconvex-nonconcave minmax optimization with stability and instability guarantees
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
- Data-driven distributionally robust multiproduct pricing problems under pure characteristics demand models
- Distributionally robust variational inequalities: relaxation, quantification and discretization
- Two steps at a time - taking GAN training in stride with Tseng's method
- The backtrack Hölder gradient method with application to min-max and min-min problems
- A quasi-Newton subspace trust region algorithm for nonmonotone variational inequalities in adversarial learning over box constraints