A quasi-Newton subspace trust region algorithm for nonmonotone variational inequalities in adversarial learning over box constraints
DOI: 10.1007/s10915-024-02679-y · MaRDI QID: Q6629223
Authors: Zicheng Qiu, Jie Jiang, Xiaojun Chen
Publication date: 29 October 2024
Published in: Journal of Scientific Computing
Recommendations
- A globally and superlinearly convergent quasi-Newton method for general box constrained variational inequalities without smoothing approximation
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks
- First-order convergence theory for weakly-convex-weakly-concave min-max problems
- A \(J\)-symmetric quasi-Newton method for minimax problems
Keywords: least squares problem; quasi-Newton method; min-max optimization; generative adversarial networks; nonmonotone variational inequality
MSC classification: Stochastic programming (90C15); Minimax problems in mathematical programming (90C47); Complementarity and equilibrium problems and variational inequalities (finite dimensions) (90C33); Numerical methods for variational inequalities and related problems (65K15)
Cites Work
- Title not available
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- Optimization and nonsmooth analysis
- Trust Region Methods
- Title not available
- Title not available
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities
- Recent advances in trust region algorithms
- Global convergence of a new hybrid Gauss-Newton structured BFGS method for nonlinear least squares problems
- Approximate Gauss–Newton Methods for Nonlinear Least Squares Problems
- Uniform exponential convergence of sample average random functions under general sampling with applications in stochastic programming
- Computer vision. Algorithms and applications
- Superlinear convergence of smoothing quasi-Newton methods for nonsmooth equations
- Data-driven distributionally robust optimization using the Wasserstein metric: performance guarantees and tractable reformulations
- Lectures on stochastic programming. Modeling and theory
- Weakly-convex-concave min-max optimization: provable algorithms and applications in machine learning
- Title not available
- Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks