Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
DOI: 10.1007/S10915-022-01819-6
zbMATH Open: 1489.90115
arXiv: 2008.09919
OpenAlex: W3080256704
MaRDI QID: Q2148117
Authors: Kevin X. D. Huang, Junyu Zhang, Shuzhong Zhang
Publication date: 21 June 2022
Published in: Journal of Scientific Computing
Full work available at URL: https://arxiv.org/abs/2008.09919
Recommendations
- Cubic regularization of Newton method and its global performance
- On the convergence of a modified regularized Newton method for convex optimization with singular solutions
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Accelerating the cubic regularization of Newton's method on convex problems
- Regularized Newton methods for convex minimization problems with singular solutions
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Convex programming (90C25)
- Minimax problems in mathematical programming (90C47)
- Methods of quasi-Newton type (90C53)
Cites Work
- Title not available
- Title not available
- Algorithmic Game Theory
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Generalized inverses. Theory and applications.
- Robust optimization
- Monotone Operators and the Proximal Point Algorithm
- Title not available
- Title not available
- Solving strongly monotone variational and quasi-variational inequalities
- A mathematical view of interior-point methods in convex optimization
- Title not available
- Dual extrapolation and its applications to solving variational inequalities and related problems
- On linear convergence of iterative methods for the variational inequality problem
- Cubic regularization of Newton method and its global performance
- Accelerating the cubic regularization of Newton's method on convex problems
- A globally convergent Newton method for solving strongly monotone variational inequalities
- Inverses of \(2\times 2\) block matrices
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
Cited In (7)
- Higher-order methods for convex-concave min-max optimization and monotone variational inequalities
- Local topology of cubic Newton methods: the parameter plane
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Efficient first order method for saddle point problems with higher order smoothness
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Perseus: a simple and optimal high-order method for variational inequalities
- An implicit gradient-descent procedure for minimax problems