Cubic regularized Newton method for the saddle point models: a global and local convergence analysis
From MaRDI portal
Publication:2148117
Abstract: In this paper, we propose a cubic regularized Newton (CRN) method for solving convex-concave saddle point problems (SPP). At each iteration, a cubic regularized saddle point subproblem is constructed and solved, which provides a search direction for the iterate. With properly chosen stepsizes, the method is shown to converge to the saddle point with a global linear and a local superlinear convergence rate, provided that the saddle point function is gradient Lipschitz and strongly-convex-strongly-concave. In the case that the function is merely convex-concave, we propose a homotopy continuation (or path-following) method. Under a Lipschitz-type error bound condition, we establish an iteration complexity bound for reaching an \(\epsilon\)-solution through the homotopy continuation approach, and a corresponding iteration complexity bound under a Hölderian-type error bound condition involving an exponent parameter.
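For orientation, the following is a minimal sketch, in the spirit of Nesterov-Polyak cubic regularization, of what a cubic regularized saddle point subproblem can look like for a saddle point function \(f(x,y)\); the regularization parameter \(M > 0\) and the stepsize \(\alpha_k\) are illustrative assumptions here, and the exact subproblem constructed in the paper may differ. Writing \(z_k = (x_k, y_k)\) and \(d = (d_x, d_y)\),
\[
d^k \in \operatorname*{arg\,min}_{d_x}\,\max_{d_y} \; \langle \nabla f(z_k), d \rangle + \tfrac{1}{2}\, d^\top \nabla^2 f(z_k)\, d + \tfrac{M}{6}\bigl( \|d_x\|^3 - \|d_y\|^3 \bigr),
\qquad
z_{k+1} = z_k + \alpha_k\, d^k .
\]
The cubic term is added in the minimization variable and subtracted in the maximization variable, so it regularizes the local quadratic model symmetrically in both directions and helps keep the subproblem a well-posed saddle point model whose solution can serve as the search direction.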
Recommendations
- Cubic regularization of Newton method and its global performance
- On the convergence of a modified regularized Newton method for convex optimization with singular solutions
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Accelerating the cubic regularization of Newton's method on convex problems
- Regularized Newton methods for convex minimization problems with singular solutions
Cites work
- scientific article; zbMATH DE number 3534286
- scientific article; zbMATH DE number 1241609
- scientific article; zbMATH DE number 5060482
- scientific article; zbMATH DE number 3326230
- scientific article; zbMATH DE number 964349
- A globally convergent Newton method for solving strongly monotone variational inequalities
- A mathematical view of interior-point methods in convex optimization
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Accelerating the cubic regularization of Newton's method on convex problems
- Algorithmic Game Theory
- Convergence rate of \(\mathcal{O}(1/k)\) for optimistic gradient and extragradient methods in smooth convex-concave saddle point problems
- Cubic regularization of Newton method and its global performance
- Dual extrapolation and its applications to solving variational inequalities and related problems
- Generalized inverses. Theory and applications.
- Inverses of 2 × 2 block matrices
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Monotone Operators and the Proximal Point Algorithm
- On linear convergence of iterative methods for the variational inequality problem
- On the quadratic convergence of the cubic regularization method under a local error bound condition
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Robust optimization
- Solving strongly monotone variational and quasi-variational inequalities
Cited in (7)
- Higher-order methods for convex-concave min-max optimization and monotone variational inequalities
- Local topology of cubic Newton methods: the parameter plane
- Minimizing uniformly convex functions by cubic regularization of Newton method
- Efficient first order method for saddle point problems with higher order smoothness
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Perseus: a simple and optimal high-order method for variational inequalities
- An implicit gradient-descent procedure for minimax problems
This page was built for publication: Cubic regularized Newton method for the saddle point models: a global and local convergence analysis