The backtrack Hölder gradient method with application to min-max and min-min problems
From MaRDI portal
Publication:6569340
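As a rough illustration of the idea named in the title, the sketch below shows plain gradient descent with an Armijo-type backtracking step size, which needs no Lipschitz constant and therefore still makes progress when the gradient is only Hölder continuous (compare the cited works on Hölder continuous gradients below). This is a minimal sketch under generic assumptions, not the authors' backtrack Hölder gradient method; the function `backtracking_gradient`, its default parameters, and the test objective `|x|^{3/2}` are illustrative choices only.

```python
# Illustrative sketch only (not the paper's algorithm): gradient descent with
# an Armijo-type backtracking step size. Because the step size is found by
# trial, no Lipschitz constant of the gradient is required, so the scheme also
# applies to objectives whose gradient is merely Hölder continuous.
import numpy as np

def backtracking_gradient(f, grad, x0, t0=1.0, beta=0.5, c=1e-4,
                          max_iter=500, tol=1e-8):
    """Minimize f from x0; t0, beta, c are conventional backtracking defaults."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = t0
        # Shrink t until the sufficient-decrease (Armijo) test holds.
        while f(x - t * g) > f(x) - c * t * np.dot(g, g):
            t *= beta
        x = x - t * g
    return x

# Test objective: sum |x_i|^{3/2}, whose gradient is Hölder (exponent 1/2)
# but not Lipschitz continuous at the origin.
f = lambda x: np.sum(np.abs(x) ** 1.5)
grad = lambda x: 1.5 * np.sign(x) * np.sqrt(np.abs(x))
print(backtracking_gradient(f, grad, np.array([2.0, -1.0])))
```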
Recommendations
- Two steps at a time-taking GAN training in stride with Tseng's method
- Alternating Proximal-Gradient Steps for (Stochastic) Nonconvex-Concave Minimax Problems
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- An implicit gradient-descent procedure for minimax problems
- Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization
Cites work
- scientific article; zbMATH DE number 3914081
- scientific article; zbMATH DE number 4029737
- scientific article; zbMATH DE number 3534286
- scientific article; zbMATH DE number 2107836
- A Relationship Between Arbitrary Positive Matrices and Doubly Stochastic Matrices
- An inertial Newton algorithm for deep learning
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- Convex analysis and monotone operator theory in Hilbert spaces
- Lagrangian methods for composite optimization
- Mathematical foundations of game theory
- Nonconvex Lagrangian-based optimization: monitoring schemes and global convergence
- On gradients of functions definable in o-minimal structures
- On rings of operators. Reduction theory
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- On the quality of first-order approximation of functions with Hölder continuous gradient
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal Subgradients, Marginal Values, and Augmented Lagrangians in Nonconvex Optimization
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Universal gradient methods for convex optimization problems
- Variational Analysis
- Zur Theorie der Gesellschaftsspiele [On the theory of games]