The divergence of the BFGS and Gauss Newton methods
Publication: 463731
DOI: 10.1007/s10107-013-0720-6 · zbMath: 1304.49065 · arXiv: 1309.7922 · OpenAlex: W2016351407 · Wikidata: Q56474753 · Scholia: Q56474753 · MaRDI QID: Q463731
Publication date: 17 October 2014
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1309.7922
MSC classification: Nonlinear programming (90C30) · Newton-type methods (49M15) · Numerical methods based on nonlinear programming (49M37)
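For context on the methods named in the title, the following is a minimal sketch of the two iterations in question: the textbook BFGS inverse-Hessian update and the Gauss-Newton step for nonlinear least squares. It is not taken from the publication and is not its divergence construction; the function names and the toy exponential-fit usage are illustrative assumptions only. As the title indicates, the publication concerns objective functions on which such iterates diverge.

```python
# Illustrative sketch only, assuming NumPy; textbook forms of the iterations
# named in the title, NOT the paper's divergence construction.
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of an inverse-Hessian approximation H.

    s = x_new - x_old, y = grad_new - grad_old; the update is well defined
    when the curvature condition y @ s > 0 holds.
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def gauss_newton_step(residual, jacobian, x):
    """One Gauss-Newton step for the least-squares objective 0.5*||r(x)||^2.

    Solves the linearized problem min_p ||J(x) p + r(x)||^2 and returns x + p.
    """
    p, *_ = np.linalg.lstsq(jacobian(x), -residual(x), rcond=None)
    return x + p

# Hypothetical toy usage: fit exp(-a*t) to synthetic data generated with a = 2.
t = np.linspace(0.0, 1.0, 20)
data = np.exp(-2.0 * t)
residual = lambda a: np.exp(-a * t) - data
jacobian = lambda a: (-t * np.exp(-a * t)).reshape(-1, 1)
a = np.array([0.5])
for _ in range(10):
    a = gauss_newton_step(residual, jacobian, a)
print(a)  # should approach [2.0] on this well-behaved toy problem
```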
Related Items (6)
- A simple canonical form for nonlinear programming problems and its use
- The projection technique for two open problems of unconstrained optimization problems
- A non-linear conjugate gradient in dual space for \(L_p\)-norm regularized non-linear least squares with application in data assimilation
- Inference for accelerated bivariate dependent competing risks model based on Archimedean copulas under progressive censoring
- Domain-decomposition least-squares Petrov-Galerkin (DD-LSPG) nonlinear model reduction
- The divergence of the barycentric Padé interpolants
Uses Software
Cites Work
- Extension of \(C^{m, \omega}\)-smooth functions by linear operators
- On gradients of functions definable in o-minimal structures
- The BFGS method with exact line searches fails for non-convex objective functions
- On the divergence of line search methods
- Newton's iterates can converge to non-stationary points
- A generalized sharp Whitney theorem for jets
- A Comparison of the Existence Theorems of Kantorovich and Moore
- Numerical Optimization
- Convergence Properties of the BFGS Algorithm
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- The Newton-Kantorovich Theorem