A quasi-Newton subspace trust region algorithm for nonmonotone variational inequalities in adversarial learning over box constraints
Publication: Q6629223 (MaRDI portal)
Recommendations
- A globally and superlinearly convergent quasi-Newton method for general box constrained variational inequalities without smoothing approximation
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- First-order convergence theory for weakly-convex-weakly-concave min-max problems
- A \(J\)-symmetric quasi-Newton method for minimax problems
Cites work
- Scientific article, zbMATH DE number 3928227 (no title available)
- Scientific article, zbMATH DE number 49749 (no title available)
- Scientific article, zbMATH DE number 2121076 (no title available)
- Scientific article, zbMATH DE number 7415112 (no title available)
- Approximate Gauss–Newton Methods for Nonlinear Least Squares Problems
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Computer vision. Algorithms and applications
- Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities
- Global convergence of a new hybrid Gauss-Newton structured BFGS method for nonlinear least squares problems
- Lectures on stochastic programming. Modeling and theory
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- Optimization and nonsmooth analysis
- Prox-Method with Rate of Convergence \(O(1/t)\) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Recent advances in trust region algorithms
- Superlinear convergence of smoothing quasi-Newton methods for nonsmooth equations
- Trust Region Methods
- Uniform exponential convergence of sample average random functions under general sampling with applications in stochastic programming
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning