Rates of convergence for adaptive Newton methods
From MaRDI portal
Publication: 802470
DOI: 10.1007/BF00938596
zbMath: 0558.90082
OpenAlex: W146697093
MaRDI QID: Q802470
Publication date: 1986
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://doi.org/10.1007/bf00938596
Keywords: constrained optimization; Newton-type methods; infinite-dimensional spaces; superlinear rates of convergence
MSC classification: Nonlinear programming (90C30); Newton-type methods (49M15); Numerical methods based on nonlinear programming (49M37); Programming in abstract spaces (90C48)
Related Items
- Convergence of algorithms for perturbed optimization problems
- Approximate quasi-Newton methods
- Global convergence of inexact reduced SQP methods
- A survey of truncated-Newton methods
- Degeneracy in NLP and the development of results motivated by its presence
- On the use of consistent approximations in the solution of semi-infinite optimization and optimal control problems
Cites Work
- The effect of perturbations on the convergence rates of optimization algorithms
- Newton's method for singular constrained optimization problems
- On theoretical and numerical aspects of the bang-bang-principle
- Diagonally Modified Conditional Gradient Methods for Input Constrained Optimal Control Problems
- Newton’s Method and the Goldstein Step-Length Rule for Constrained Minimization Problems
- Inexact Newton Methods
- Perturbed Kuhn-Tucker points and rates of convergence for a class of nonlinear-programming algorithms
- Sensitivity analysis for nonlinear programming using penalty methods
- An Adaptive Precision Method for Nonlinear Optimization Problems
- Rates of Convergence for Conditional Gradient Algorithms Near Singular and Nonsingular Extremals
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- An Adaptive Precision Gradient Method for Optimal Control
- Convergence of methods of feasible directions in extremal problems
- Minimization methods based on approximation of the initial functional by a convex functional