A new variant of the memory gradient method for unconstrained optimization
From MaRDI portal
Publication:1926613
DOI: 10.1007/s11590-011-0355-6
zbMath: 1258.90070
OpenAlex: W2102823641
MaRDI QID: Q1926613
Publication date: 28 December 2012
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-011-0355-6
Related Items (4)
- A memory gradient method based on the nonmonotone technique
- Memory gradient method for multiobjective optimization
- A nonmonotone supermemory gradient algorithm for unconstrained optimization
- A memory gradient method for non-smooth convex optimization
Cites Work
- Global convergence of a memory gradient method for unconstrained optimization
- Supermemory descent methods for unconstrained minimization
- Global convergence result for conjugate gradient methods
- A globally convergent version of the Polak-Ribière conjugate gradient method
- A new supermemory gradient method for unconstrained optimization problems
- On memory gradient method with trust region for unconstrained optimization
- Relation between the memory gradient method and the Fletcher-Reeves method
- Convergence Properties of Algorithms for Nonlinear Optimization
- Encyclopedia of Optimization
- Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search
- Testing Unconstrained Optimization Software
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Convergence properties of the Fletcher-Reeves method
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- Function minimization by conjugate gradients
- The conjugate gradient method in extremal problems