Regularized Newton Method with Global \(\mathcal{O}(1/k^2)\) Convergence
Publication: 6116237
DOI: 10.1137/22m1488752
zbMATH: 1520.65041
arXiv: 2112.02089
OpenAlex: W4384298888
MaRDI QID: Q6116237
Publication date: 11 August 2023
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/2112.02089
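For orientation (this is a sketch based on the linked arXiv preprint, not text quoted from this page): the method regularizes the Newton step by the square root of the gradient norm, where \(H\) below is assumed to denote a Lipschitz constant of the Hessian \(\nabla^2 f\),
\[
  x_{k+1} \;=\; x_k \;-\; \bigl(\nabla^2 f(x_k) + \sqrt{H\,\|\nabla f(x_k)\|}\, I\bigr)^{-1} \nabla f(x_k).
\]
The regularization term shrinks as the gradient vanishes, which is consistent with the global \(\mathcal{O}(1/k^2)\) rate named in the title.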
Cites Work
- Simple examples for the failure of Newton's method with line search for strictly convex minimization
- Gradient methods for minimizing composite functions
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Lectures on convex optimization
- A regularized Newton method without line search for unconstrained optimization
- Gradient methods of maximization
- Accelerating the cubic regularization of Newton's method on convex problems
- Regularized Newton method for unconstrained convex optimization
- Introductory lectures on convex optimization. A basic course.
- On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption
- Regularized Newton methods for convex minimization problems with singular solutions
- Curiosities and counterexamples in smooth convex optimization
- Convergence and complexity analysis of a Levenberg-Marquardt algorithm for inverse problems
- Implementable tensor methods in unconstrained convex optimization
- Levenberg-Marquardt method for solving systems of absolute value equations
- Oracle complexity of second-order methods for smooth convex optimization
- On the divergence of line search methods
- Cubic regularization of Newton method and its global performance
- Minimization of functions having Lipschitz continuous first partial derivatives
- Superfast second-order methods for unconstrained convex optimization
- Inexact accelerated high-order proximal-point methods
- Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
- A dynamic approach to a proximal-Newton method for monotone inclusions in Hilbert spaces, with complexity O(1/n^2)
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- A Continuous Dynamical Newton-Like Approach to Solving Monotone Inclusions
- Large-Scale Machine Learning with Stochastic Gradient Descent
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Trust Region Methods
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Optimization Methods for Large-Scale Machine Learning
- Convergence Properties of the Inexact Levenberg-Marquardt Method under Local Error Bound Conditions
- A Nonmonotone Line Search Technique for Newton’s Method
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Modified Gauss–Newton scheme with worst case guarantees for global performance
- A method for the solution of certain non-linear problems in least squares
- Greedy Quasi-Newton Methods with Explicit Superlinear Convergence
- High-Order Optimization Methods for Fully Composite Problems