Strong global convergence of an adaptive nonmonotone memory gradient method
DOI: 10.1016/j.amc.2006.07.075 · zbMath: 1113.65064 · OpenAlex: W2049218954 · MaRDI QID: Q870222
Baofeng Wu, Zhensheng Yu, Wei-Guo Zhang
Publication date: 12 March 2007
Published in: Applied Mathematics and Computation
Full work available at URL: https://doi.org/10.1016/j.amc.2006.07.075
Keywords: algorithm; unconstrained optimization; global convergence; numerical examples; memory gradient method; adaptive nonmonotone technique
MSC classification: Numerical mathematical programming methods (65K05) · Nonlinear programming (90C30) · Methods of reduced gradient type (90C52)
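The keywords above name the two ingredients of the paper's algorithm: a memory gradient search direction and a nonmonotone acceptance test. For orientation only, here is a minimal Python sketch of a generic method of this type, pairing a memory gradient direction with a Grippo-Lampariello-Lucidi-style nonmonotone Armijo search; the memory weight `beta`, memory length `m`, window `M`, and all parameter values are illustrative assumptions, not the adaptive scheme of Wu, Yu, and Zhang.

```python
import numpy as np

def nonmonotone_memory_gradient(f, grad, x0, m=3, M=5, delta=1e-4,
                                rho=0.5, tol=1e-6, max_iter=1000):
    """Generic memory gradient method with a GLL-type nonmonotone
    Armijo line search (an illustrative sketch, not the paper's
    exact adaptive scheme)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    dirs = []        # memory of the last few search directions
    f_hist = [f(x)]  # recent function values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Memory gradient direction: steepest descent plus a small
        # contribution from up to m previous directions (the weight
        # beta below is a hypothetical choice).
        d = -g
        for d_old in dirs[-m:]:
            beta = 0.1 * np.linalg.norm(g) / max(np.linalg.norm(d_old), 1e-12)
            d = d + beta * d_old
        if g @ d >= 0:  # safeguard: fall back to steepest descent
            d = -g
        # Nonmonotone Armijo test: compare against the maximum of
        # the last M function values instead of only f(x).
        f_ref = max(f_hist[-M:])
        alpha = 1.0
        while f(x + alpha * d) > f_ref + delta * alpha * (g @ d):
            alpha *= rho
            if alpha < 1e-12:
                break
        x = x + alpha * d
        g = grad(x)
        dirs.append(d)
        f_hist.append(f(x))
    return x
```

For example, `nonmonotone_memory_gradient(lambda x: float(x @ x), lambda x: 2 * x, np.ones(5))` drives the iterates to the origin in a few steps. The paper's adaptive nonmonotone technique presumably varies the nonmonotone reference value rather than fixing the window at M as this sketch does.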
Related Items
- A memory gradient method based on the nonmonotone technique
- A new supermemory gradient method for unconstrained optimization problems
Cites Work
- Global convergence of nonmonotone descent methods for unconstrained optimization problems
- Global convergence of a memory gradient method for unconstrained optimization
- Non-monotone trust-region algorithms for nonlinear optimization subject to convex constraints
- Memory gradient method for the minimization of functions
- Study on a supermemory gradient method for the minimization of functions
- Nonmonotone Trust-Region Methods for Bound-Constrained Semismooth Equations with Applications to Nonlinear Mixed Complementarity Problems
- A three-parameter family of nonlinear conjugate gradient methods
- Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Nonmonotone Spectral Methods for Large-Scale Nonlinear Systems
- A new class of memory gradient methods with inexact line searches
- A Nonmonotone Line Search Technique for Newton’s Method
- Convergence conditions, line search algorithms and trust region implementations for the Polak–Ribière conjugate gradient method
- On the nonmonotone line search