Global convergence of a memory gradient method for unconstrained optimization
Publication: 861515
DOI: 10.1007/s10589-006-8719-z
zbMath: 1128.90059
MaRDI QID: Q861515
Yasushi Narushima, Hiroshi Yabe
Publication date: 29 January 2007
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-006-8719-z
Keywords: unconstrained optimization; global convergence; Wolfe conditions; descent search direction; memory gradient method
90C52: Methods of reduced gradient type
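The keywords point at the paper's setting: a memory gradient method builds its search direction from the current steepest descent direction plus a combination of previously used directions, with step sizes chosen under the Wolfe conditions. The sketch below is a minimal illustration of that idea only, not the authors' specific scheme; the memory size `m`, the mixing weight `gamma`, and the descent fallback rule are assumptions made for the example.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Simple bisection-style search for a step satisfying the weak Wolfe
    conditions. Illustrative only; practical codes use interpolation."""
    fx = f(x)
    slope = grad(x) @ d                      # directional derivative at x
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + alpha * d) > fx + c1 * alpha * slope:
            hi = alpha                       # Armijo fails: shrink the step
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < c2 * slope:
            lo = alpha                       # curvature fails: grow the step
            alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return alpha                     # both Wolfe conditions hold
    return alpha

def memory_gradient(f, grad, x0, m=3, gamma=0.1, tol=1e-8, max_iter=500):
    """Sketch of a memory gradient iteration: the direction combines -grad
    with the last m directions, resetting to -grad whenever the combined
    direction is not a descent direction (parameters are illustrative)."""
    x = np.asarray(x0, dtype=float)
    past = []                                # previously used directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g
        for dp in past:                      # add a memory term per stored direction
            d = d + (gamma / len(past)) * dp
        if g @ d > -1e-10 * (g @ g):         # not sufficient descent: fall back
            d = -g
        alpha = wolfe_line_search(f, grad, x, d)
        x = x + alpha * d
        past = (past + [d])[-m:]             # keep only the last m directions
    return x
```

On a strictly convex quadratic such as `f(x) = x @ x`, the iteration reduces the gradient norm below the tolerance within a few steps; the point of the memory terms in the literature is improved behavior on ill-conditioned problems without storing matrices.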
Related Items
- A nonmonotone supermemory gradient algorithm for unconstrained optimization
- Strong global convergence of an adaptive nonmonotone memory gradient method
- Global convergence of a memory gradient method without line search
- Conjugate gradient methods using value of objective function for unconstrained optimization
- A new supermemory gradient method for unconstrained optimization problems
- A new variant of the memory gradient method for unconstrained optimization
- New conjugate gradient-like methods for unconstrained optimization
Cites Work
- A truncated Newton method with non-monotone line search for unconstrained optimization
- A conjugate direction algorithm without line searches
- New quasi-Newton equation and related methods for unconstrained optimization
- Memory gradient method for the minimization of functions
- Study on a supermemory gradient method for the minimization of functions
- Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search
- Testing Unconstrained Optimization Software
- Updating Quasi-Newton Matrices with Limited Storage
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Numerical Optimization
- Line search algorithms with guaranteed sufficient decrease
- Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations