Convergence properties of the regularized Newton method for the unconstrained nonconvex optimization
Publication: 708864
DOI: 10.1007/s00245-009-9094-9
zbMath: 1228.90087
OpenAlex: W2089765129
MaRDI QID: Q708864
Publication date: 15 October 2010
Published in: Applied Mathematics and Optimization
Full work available at URL: https://doi.org/10.1007/s00245-009-9094-9
Keywords: global convergence, superlinear convergence, local error bound, global complexity bound, regularized Newton methods
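The keywords above all refer to properties of methods built around the regularized Newton step. As a minimal sketch of the generic form such methods share (the paper's concrete rule for choosing the regularization parameter \(\mu_k\) on nonconvex problems is not reproduced here), each iteration solves

\[ \bigl(\nabla^2 f(x_k) + \mu_k I\bigr)\, d_k = -\nabla f(x_k), \qquad x_{k+1} = x_k + t_k\, d_k, \]

where \(t_k\) is a step size (typically from a line search) and \(\mu_k > 0\) is chosen large enough that \(\nabla^2 f(x_k) + \mu_k I\) is positive definite, yet small near a stationary point (for instance proportional to \(\lVert \nabla f(x_k)\rVert^{\delta}\), one commonly assumed choice), so that superlinear local convergence can be retained under a local error bound condition.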
Related Items (19)
- A regularized limited memory BFGS method for large-scale unconstrained optimization and its efficient implementations
- A regularized Newton method for degenerate unconstrained optimization problems
- On a global complexity bound of the Levenberg-Marquardt method
- A fast and simple modification of Newton's method avoiding saddle points
- Regularization of limited memory quasi-Newton methods for large-scale nonconvex minimization
- The Levenberg-Marquardt-type methods for a kind of vertical complementarity problem
- A Regularized Newton Method for \({\boldsymbol{\ell}}_{q}\)-Norm Composite Optimization Problems
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- A regularized limited memory subspace minimization conjugate gradient method for unconstrained optimization
- A new method with sufficient descent property for unconstrained optimization
- An inexact proximal regularization method for unconstrained optimization
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- Local convergence analysis of a primal-dual method for bound-constrained optimization without SOSC
- A new regularized quasi-Newton method for unconstrained optimization
- A regularized Newton method without line search for unconstrained optimization
- Two nonmonotone trust region algorithms based on an improved Newton method
- Global complexity bound of the Levenberg-Marquardt method
- Correction of trust region method with a new modified Newton method
Cites Work
- Unnamed Item
- Truncated regularized Newton method for convex minimizations
- Regularized Newton method for unconstrained convex optimization
- Regularized Newton methods for convex minimization problems with singular solutions
- Matrix Analysis
- Numerical Optimization
- Convergence Properties of the Inexact Levenberg-Marquardt Method under Local Error Bound Conditions