Complexity of the regularized Newton's method
Publication: 5146232
zbMATH Open: 1474.49059 · arXiv: 1706.08483 · MaRDI QID: Q5146232 · FDO: Q5146232
Publication date: 25 January 2021
Abstract: Newton's method for finding an unconstrained minimizer of a strictly convex function does not, in general, converge from an arbitrary starting point. We introduce and study the damped regularized Newton's method (DRNM). It converges globally for any strictly convex function that has a minimizer in \(\mathbb{R}^n\). Locally, DRNM converges with a quadratic rate. We characterize the neighborhood of the minimizer in which the quadratic rate occurs and, based on it, estimate the number of DRNM steps required to find an \(\varepsilon\)-approximation of the minimizer.
Full work available at URL: https://arxiv.org/abs/1706.08483
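The abstract describes a regularized Newton iteration whose damping step is governed by a Newton-decrement-like quantity. The sketch below illustrates one plausible scheme of this kind; the regularizer \(\|\nabla f(x)\| I\) and the damping rule \(t_k = 1/(1+\lambda_k)\) are assumptions chosen for illustration and need not match the exact rules analyzed in the paper.

```python
import numpy as np

def damped_regularized_newton(grad, hess, x0, eps=1e-8, max_iter=100):
    """Sketch of a damped regularized Newton iteration (DRNM-style).

    Assumptions (not taken from the paper's text): the Hessian is
    regularized by ||grad f(x)|| * I, and the damping step is
    t = 1 / (1 + lambda), where lambda is a Newton-decrement-like
    quantity computed from the regularized direction.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:                 # eps-approximate stationarity
            break
        H_reg = hess(x) + np.linalg.norm(g) * np.eye(x.size)  # regularized Hessian
        d = np.linalg.solve(H_reg, g)                # regularized Newton direction
        lam = np.sqrt(g @ d)                         # Newton-decrement-like quantity
        t = 1.0 / (1.0 + lam)                        # damping step size
        x = x - t * d
    return x

# Usage example: a strictly convex quadratic with a unique minimizer at the origin.
if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    grad = lambda x: A @ x
    hess = lambda x: A
    print(damped_regularized_newton(grad, hess, np.array([5.0, -3.0])))
```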
Recommendations
- Regularized Newton method for unconstrained convex optimization
- Convergence properties of the regularized Newton method for the unconstrained nonconvex optimization
- Regularized Newton methods for convex minimization problems with singular solutions
- Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
Keywords: global convergence; quadratic convergence rate; regularized Newton's method; Newton's area; Newton's decrement
Cited In (3)