A perfect example for the BFGS method
DOI: 10.1007/s10107-012-0522-2 · zbMath: 1262.49040 · OpenAlex: W2116538711 · MaRDI QID: Q1949264
Publication date: 6 May 2013
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-012-0522-2
Nonlinear programming (90C30) · Newton-type methods (49M15) · Numerical methods based on nonlinear programming (49M37)
Related Items (13)
- Forward-backward quasi-Newton methods for nonsmooth optimization problems
- A Riemannian BFGS Method for Nonconvex Optimization Problems
- A quasi-Newton method with Wolfe line searches for multiobjective optimization
- Levenberg-Marquardt multi-classification using hinge loss function
- The projection technique for two open problems of unconstrained optimization problems
- Greedy PSB methods with explicit superlinear convergence
- Properties and parameter estimation of the partly-exponential distribution
- An overview of nonlinear optimization
- An adaptive projection BFGS method for nonconvex unconstrained optimization problems
- A Riemannian BFGS Method Without Differentiated Retraction for Nonconvex Optimization Problems
- Fast methods for computing centroidal Laguerre tessellations for prescribed volume fractions with applications to microstructure generation of polycrystalline materials
- Nonsmooth Variants of Powell's BFGS Convergence Theorem
- Globally convergent Newton-type methods for multiobjective optimization
Cites Work
- The BFGS method with exact line searches fails for non-convex objective functions
- On the convergence of the DFP algorithm for unconstrained optimization when there are only two variables
- Optimization theory and methods. Nonlinear programming
- Global Optimization with Polynomials and the Problem of Moments
- On the Global Convergence of the BFGS Method for Nonconvex Unconstrained Optimization Problems
- Global Convergence of a Class of Quasi-Newton Methods on Convex Problems
- Variable Metric Method for Minimization
- Quasi-Newton Methods, Motivation and Theory
- Convergence Properties of the BFGS Algorithm
- A Rapidly Convergent Descent Method for Minimization
- A Family of Variable-Metric Methods Derived by Variational Means
- The Convergence of a Class of Double-rank Minimization Algorithms
- A new approach to variable metric algorithms
- On the Convergence of the Variable Metric Algorithm
- Conditioning of Quasi-Newton Methods for Function Minimization