A new descent memory gradient method and its global convergence
DOI: 10.1007/s11424-011-8150-0 · zbMATH: 1263.90128 · OpenAlex: W2061960946 · MaRDI QID: Q1937779
Publication date: 31 January 2013
Published in: Journal of Systems Science and Complexity
Full work available at URL: https://doi.org/10.1007/s11424-011-8150-0
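The record's title refers to a descent memory gradient method with a global convergence result; the method itself is not reproduced in this entry. As a rough illustration only, the sketch below shows the general shape of such a scheme: a negative-gradient direction augmented with a small memory of previous directions, kept a descent direction by construction, combined with an Armijo backtracking line search. The function name `memory_gradient`, the weighting rule, and all parameter values are assumptions made for this sketch and are not taken from the cited paper.

```python
import numpy as np

def memory_gradient(f, grad, x0, memory=3, beta=0.4, sigma=1e-4,
                    tol=1e-6, max_iter=1000):
    """Illustrative memory gradient iteration with Armijo backtracking.

    The search direction combines the negative gradient with a few stored
    previous directions; the weighting rule below is a placeholder chosen so
    that the memory term can never dominate the gradient term, not the rule
    analysed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    past_dirs = []                                  # d_{k-1}, ..., d_{k-m}
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        d = -g
        if past_dirs:
            # hypothetical weights: scaled so the memory term has norm <= 0.5*||g||,
            # which keeps g.d <= -0.5*||g||^2, i.e. d stays a descent direction
            w = 0.5 * gnorm / (sum(np.linalg.norm(dp) for dp in past_dirs) + 1e-12)
            for dp in past_dirs:
                d = d + w * dp
        if g @ d >= 0:                              # extra descent safeguard
            d = -g
        # Armijo backtracking line search
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + sigma * t * (g @ d) and t > 1e-12:
            t *= beta
        x = x + t * d
        past_dirs = [d] + past_dirs[:memory - 1]    # keep at most `memory` directions
    return x

# usage sketch: run the iteration on the Rosenbrock function
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
g = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                        200 * (x[1] - x[0]**2)])
print(memory_gradient(f, g, [-1.2, 1.0]))
```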
Related Items
- A memory gradient method based on the nonmonotone technique
- A Shamanskii-like self-adaptive Levenberg-Marquardt method for nonlinear equations
- Strong solutions and global attractors for Kirchhoff type equation
- Supermemory gradient methods for monotone nonlinear equations with convex constraints
- A class of derivative-free CG projection methods for nonsmooth equations with an application to the LASSO problem
- Discrete-time Zhang neural networks for time-varying nonlinear optimization
Cites Work
- Convergence properties of the dependent PRP conjugate gradient methods
- The convergence properties of some new conjugate gradient methods
- Convergence of the Polak-Ribière-Polyak conjugate gradient method
- Convergence of Liu-Storey conjugate gradient method
- On the convergence of a new hybrid projection algorithm
- Global convergence of a modified Fletcher-Reeves conjugate gradient method with Armijo-type line search
- A Dai-Yuan conjugate gradient algorithm with sufficient descent and conjugacy conditions for unconstrained optimization
- A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- Function minimization by conjugate gradients
- A Two-Term PRP-Based Descent Method
- The conjugate gradient method in extremal problems
- Methods of conjugate gradients for solving linear systems