A new supermemory gradient method for unconstrained optimization problems
From MaRDI portal
Publication:1758038
DOI: 10.1007/s11590-011-0328-9
zbMath: 1279.90165
OpenAlex: W2123676331
MaRDI QID: Q1758038
Publication date: 7 November 2012
Published in: Optimization Letters
Full work available at URL: https://doi.org/10.1007/s11590-011-0328-9
Keywords: unconstrained optimization; global convergence; trust region technique; ODE-based methods; supermemory gradient method
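The keywords name the supermemory gradient idea: augment the steepest-descent direction with a combination of several previous search directions. The sketch below shows a generic memory-gradient iteration under assumed parameters; it uses a simple Armijo backtracking line search rather than the trust-region technique of the catalogued paper, and the function names, the memory weight `beta`, and the update rule are illustrative, not taken from the publication.

```python
import numpy as np

def supermemory_gradient(f, grad, x0, m=3, beta=0.3, step=0.1,
                         tol=1e-8, max_iter=5000):
    """Minimize f by a generic (super)memory gradient iteration.

    Direction: d_k = -g_k + (beta / m) * sum of the last m directions.
    Illustrative sketch only -- NOT the specific method of the paper,
    which combines the memory direction with a trust-region technique.
    """
    x = np.asarray(x0, dtype=float)
    memory = []  # the last m search directions
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # negative gradient plus a weighted average of stored directions
        d = -g
        if memory:
            d = d + (beta / len(memory)) * sum(memory)
        # Armijo backtracking line search (stands in for a trust region)
        t = step
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
        memory.append(d)
        if len(memory) > m:
            memory.pop(0)  # keep only the m most recent directions
    return x
```

On a well-conditioned quadratic such as f(x) = ||x||², the memory term acts like a momentum contribution, and the iteration converges to the minimizer from any starting point.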
Related Items (4)
- A memory gradient method based on the nonmonotone technique
- A new variant of the memory gradient method for unconstrained optimization
- Supermemory gradient methods for monotone nonlinear equations with convex constraints
- A nonmonotone supermemory gradient algorithm for unconstrained optimization
Cites Work
- An ODE-based trust region method for unconstrained optimization problems
- Global convergence of a memory gradient method for unconstrained optimization
- A new class of supermemory gradient methods
- Strong global convergence of an adaptive nonmonotone memory gradient method
- On the limited memory BFGS method for large scale optimization
- Global convergence of a memory gradient method without line search
- Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems
- Some effective methods for unconstrained optimization based on the solution of systems of ordinary differential equations
- Representations of quasi-Newton matrices and their use in limited memory methods
- A class of nonmonotone stabilization trust region methods
- The convergence of subspace trust region methods
- Convergence of supermemory gradient method
- On memory gradient method with trust region for unconstrained optimization
- Optimization theory and methods. Nonlinear programming
- A new super-memory gradient method with curve search rule
- Testing Unconstrained Optimization Software
- Restart procedures for the conjugate gradient method
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property