Global convergence of a memory gradient method for unconstrained optimization
DOI: 10.1007/s10589-006-8719-z
zbMath: 1128.90059
OpenAlex: W1975553457
MaRDI QID: Q861515
Yasushi Narushima, Hiroshi Yabe
Publication date: 29 January 2007
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-006-8719-z
Keywords: unconstrained optimization; global convergence; Wolfe conditions; descent search direction; memory gradient method
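The record carries no abstract, but the keywords name the ingredients of the method studied: a memory gradient direction that mixes the current negative gradient with a few previous search directions, kept a descent direction, with step sizes satisfying the Wolfe conditions. The Python sketch below illustrates only that general framework; the damping rule for the coefficients, the parameters `gamma` and `m`, and the SciPy-based Wolfe line search are illustrative assumptions, not the specific formula proved globally convergent in the paper.

```python
# A minimal sketch of the memory gradient framework, assuming the standard
# recursion d_0 = -g_0 and d_k = -g_k + sum_{i=1..m} beta_{k,i} d_{k-i}.
# The damping rule for beta below is an illustrative choice (not the
# paper's formula) that guarantees g_k^T d_k <= -(1 - gamma) ||g_k||^2,
# so every d_k is a descent direction.
import numpy as np
from scipy.optimize import line_search

def memory_gradient(f, grad, x0, m=3, gamma=0.5, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    memory = []  # the last m search directions d_{k-1}, ..., d_{k-m}
    for _ in range(max_iter):
        gnorm2 = g @ g
        if np.sqrt(gnorm2) <= tol:
            break
        # Memory gradient direction: steepest descent plus a damped
        # combination of previous directions.
        d = -g
        for d_old in memory:
            beta = (gamma / len(memory)) * gnorm2 / (abs(g @ d_old) + gnorm2)
            d = d + beta * d_old
        # Step size satisfying the (strong) Wolfe conditions.
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:  # line search failed; fall back to steepest descent
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-4
        x = x + alpha * d
        g = grad(x)
        memory = ([d] + memory)[:m]  # most recent direction first
    return x

# Example: minimize the Rosenbrock function.
if __name__ == "__main__":
    from scipy.optimize import rosen, rosen_der
    print(memory_gradient(rosen, rosen_der, np.array([-1.2, 1.0])))
```

The fallback to steepest descent is included because SciPy's `line_search` can return `None` for the step size when the Wolfe conditions are not met within its iteration budget; on well-scaled problems it is rarely triggered.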
Related Items (13)
- A memory gradient method based on the nonmonotone technique
- New conjugate gradient-like methods for unconstrained optimization
- Strong global convergence of an adaptive nonmonotone memory gradient method
- A new variant of the memory gradient method for unconstrained optimization
- Memory gradient method for multiobjective optimization
- A novel fractional Tikhonov regularization coupled with an improved super-memory gradient method and application to dynamic force identification problems
- Global convergence of a memory gradient method without line search
- Supermemory gradient methods for monotone nonlinear equations with convex constraints
- Conjugate gradient methods using value of objective function for unconstrained optimization
- A new supermemory gradient method for unconstrained optimization problems
- A nonmonotone supermemory gradient algorithm for unconstrained optimization
- A novel method of dynamic force identification and its application
- A memory gradient method for non-smooth convex optimization
Cites Work
- A truncated Newton method with non-monotone line search for unconstrained optimization
- A conjugate direction algorithm without line searches
- New quasi-Newton equation and related methods for unconstrained optimization
- Memory gradient method for the minimization of functions
- Study on a supermemory gradient method for the minimization of functions
- Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search
- Testing Unconstrained Optimization Software
- Updating Quasi-Newton Matrices with Limited Storage
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Numerical Optimization
- Line search algorithms with guaranteed sufficient decrease
- Properties and numerical performance of quasi-Newton methods with modified quasi-Newton equations