A nonmonotone supermemory gradient algorithm for unconstrained optimization
From MaRDI portal
Publication: 741395
DOI: 10.1007/s12190-013-0747-0
zbMath: 1296.90116
MaRDI QID: Q741395
Publication date: 12 September 2014
Published in: Journal of Applied Mathematics and Computing
Full work available at URL: https://doi.org/10.1007/s12190-013-0747-0
Keywords: convergence analysis; unconstrained optimization; nonmonotone line search; supermemory gradient method
MSC Classification
65K05: Numerical mathematical programming methods
90C30: Nonlinear programming
49M37: Numerical methods based on nonlinear programming
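The keywords above center on nonmonotone line search combined with a (super)memory gradient direction. As background for the record, the following is a minimal sketch of a Grippo–Lampariello–Lucidi-style nonmonotone Armijo backtracking rule, the classical technique the cited line-search literature builds on; all function and parameter names here are illustrative assumptions, not taken from the indexed paper.

```python
import numpy as np

def nonmonotone_armijo(f, grad_f, x, d, f_hist, delta=1e-4, rho=0.5, max_iter=50):
    """Nonmonotone Armijo backtracking (GLL-style sketch, not the paper's rule).

    Accepts a step length alpha satisfying
        f(x + alpha*d) <= max(recent f values) + delta * alpha * grad_f(x).d,
    so the objective may rise relative to f(x) but not relative to the
    worst of the stored recent values.
    """
    f_ref = max(f_hist)           # reference value: max over recent iterates
    slope = grad_f(x).dot(d)      # directional derivative; d must be a descent direction
    alpha = 1.0
    for _ in range(max_iter):
        if f(x + alpha * d) <= f_ref + delta * alpha * slope:
            return alpha
        alpha *= rho              # backtrack geometrically
    return alpha

# Minimal usage on the convex quadratic f(x) = ||x||^2
f = lambda x: float(x.dot(x))
g = lambda x: 2.0 * x
x0 = np.array([3.0, -4.0])
d0 = -g(x0)                       # steepest-descent direction
alpha = nonmonotone_armijo(f, g, x0, d0, f_hist=[f(x0)])
```

With only one stored value in `f_hist` the rule reduces to the ordinary monotone Armijo condition; the nonmonotone behavior appears once several recent objective values are kept in the history window.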
Related Items
- A memory gradient method for non-smooth convex optimization
- The higher-order Levenberg–Marquardt method with Armijo type line search for nonlinear equations
- Supermemory gradient methods for monotone nonlinear equations with convex constraints
- Memory gradient method for multiobjective optimization
Cites Work
- Optimization of nonlinear geological structure mapping using hybrid neuro-genetic techniques
- Step-size estimation for unconstrained optimization methods
- The convergence of conjugate gradient method with nonmonotone line search
- Hybrid pattern search and simulated annealing for fuzzy production planning problems
- A non-monotone line search algorithm for unconstrained optimization
- A hybrid ODE-based method for unconstrained optimization problems
- A nonmonotonic trust region algorithm for a class of semi-infinite minimax programming
- A nonmonotone trust region method for unconstrained optimization
- Sufficient descent nonlinear conjugate gradient methods with conjugacy condition
- Global convergence of a memory gradient method for unconstrained optimization
- Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems
- A class of nonmonotone conjugate gradient methods for unconstrained optimization
- Nonmonotonic trust region algorithm
- A new supermemory gradient method for unconstrained optimization problems
- Nonmonotone trust region method for solving optimization problems
- A new variant of the memory gradient method for unconstrained optimization
- A new modified nonmonotone adaptive trust region method for unconstrained optimization
- Hybrid LS-SA-PS methods for solving fuzzy non-linear programming problems
- An unconstrained optimization method using nonmonotone second order Goldstein's line search
- On memory gradient method with trust region for unconstrained optimization
- Optimization theory and methods. Nonlinear programming
- Memory gradient method for the minimization of functions
- Study on a supermemory gradient method for the minimization of functions
- A new super-memory gradient method with curve search rule
- A new nonmonotone trust-region method of conic model for solving unconstrained optimization
- Hybrid simulated annealing and genetic algorithms for industrial production management problems
- A Three-Term Conjugate Gradient Method with Sufficient Descent Property for Unconstrained Optimization
- Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems
- The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
- A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence
- A Nonmonotone Line Search Technique and Its Application to Unconstrained Optimization
- A Nonmonotone Line Search Technique for Newton’s Method
- An Assessment of Nonmonotone Linesearch Techniques for Unconstrained Optimization
- A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search
- A nonmonotone memory gradient method for unconstrained optimization
- Benchmarking optimization software with performance profiles
- On the nonmonotone line search