A Trust Region Algorithm with Conjugate Gradient Technique for Optimization Problems
Publication:3083525
DOI: 10.1080/01630563.2010.532273 · zbMath: 1213.65090 · OpenAlex: W2100815210 · MaRDI QID: Q3083525
Publication date: 22 March 2011
Published in: Numerical Functional Analysis and Optimization
Full work available at URL: https://doi.org/10.1080/01630563.2010.532273
Mathematics Subject Classification: Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26)
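The record gives no algorithmic details beyond the title, which names a trust-region algorithm that uses a conjugate gradient technique. Purely as an illustration of that general family, and not as the authors' specific method, the sketch below implements a standard trust-region loop whose subproblem is solved by the Steihaug-Toint truncated conjugate gradient iteration; every function name, tolerance, and parameter value is an assumption chosen for the example.

```python
import numpy as np


def steihaug_cg(g, B, delta, tol=1e-8, max_iter=100):
    """Approximately minimize g.T p + 0.5 p.T B p subject to ||p|| <= delta
    with the Steihaug-Toint truncated conjugate gradient iteration."""
    p = np.zeros_like(g)
    r = g.copy()              # gradient of the quadratic model at p
    d = -r                    # first search direction: steepest descent
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                          # negative curvature encountered
            return p + _step_to_boundary(p, d, delta)
        alpha = (r @ r) / dBd
        p_new = p + alpha * d
        if np.linalg.norm(p_new) >= delta:    # CG iterate leaves the region
            return p + _step_to_boundary(p, d, delta)
        r_new = r + alpha * Bd
        if np.linalg.norm(r_new) < tol:
            return p_new
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        p, r = p_new, r_new
    return p


def _step_to_boundary(p, d, delta):
    """Return tau*d with tau >= 0 chosen so that ||p + tau*d|| = delta."""
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return tau * d


def trust_region(f, grad, hess, x0, delta=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-6, max_iter=200):
    """Basic trust-region loop whose trial step comes from steihaug_cg."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = steihaug_cg(g, B, delta)
        pred = -(g @ p + 0.5 * (p @ (B @ p)))   # predicted reduction
        ared = f(x) - f(x + p)                  # actual reduction
        rho = ared / pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                        # poor agreement: shrink region
        elif rho > 0.75 and np.linalg.norm(p) >= 0.99 * delta:
            delta = min(2.0 * delta, delta_max)  # good boundary step: expand region
        if rho > eta:
            x = x + p                            # accept the trial step
    return x


if __name__ == "__main__":
    # Rosenbrock test function with analytic gradient and Hessian.
    f = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    grad = lambda x: np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                               200.0 * (x[1] - x[0] ** 2)])
    hess = lambda x: np.array([[1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                               [-400.0 * x[0], 200.0]])
    print(trust_region(f, grad, hess, [-1.2, 1.0]))   # expected: approx [1., 1.]
```

The ratio of actual to predicted reduction governs both acceptance of the trial step and the update of the trust-region radius, which is the usual mechanism in this class of methods.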
Related Items (6)
- A new adaptive trust region algorithm for optimization problems
- A modified nonmonotone BFGS algorithm for unconstrained optimization
- A modified conjugate gradient algorithm with backtracking line search technique for large-scale nonlinear equations
- A quasi-Newton algorithm for large-scale nonlinear equations
- A modified three-term PRP conjugate gradient algorithm for optimization models
- Correction of trust region method with a new modified Newton method
Cites Work
- Active-set projected trust-region algorithm for box-constrained nonsmooth equations
- Convergence analysis of a modified BFGS method on convex minimizations
- A conjugate gradient method with descent direction for unconstrained optimization
- New nonlinear conjugate gradient formulas for large-scale unconstrained optimization problems
- The convergence properties of some new conjugate gradient methods
- The superlinear convergence analysis of a nonmonotone BFGS algorithm on convex objective functions
- Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems
- BFGS trust-region method for symmetric nonlinear equations
- A modified PRP conjugate gradient method
- Efficient hybrid conjugate gradient techniques
- Global convergence result for conjugate gradient methods
- Trust region algorithm for nonsmooth optimization
- A new trust region method for nonlinear equations
- An affine scaling trust-region approach to bound-constrained nonlinear systems
- New line search methods for unconstrained optimization
- Nonmonotone Trust-Region Methods for Bound-Constrained Semismooth Equations with Applications to Nonlinear Mixed Complementarity Problems
- Convergence Properties of Algorithms for Nonlinear Optimization
- Global convergence of the Polak-Ribière-Polyak conjugate gradient method with an Armijo-type inexact line search for nonconvex unconstrained optimization problems
- A descent modified Polak–Ribière–Polyak conjugate gradient method and its global convergence
- Modified limited memory BFGS method with nonmonotone line search for unconstrained optimization
- Descent Property and Global Convergence of the Fletcher-Reeves Method with Inexact Line Search
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- A New Conjugate Gradient Method with Guaranteed Descent and an Efficient Line Search
- Function minimization by conjugate gradients
- Maximization by Quadratic Hill-Climbing
- An algorithm for solving linearly constrained optimization problems
- A method for the solution of certain non-linear problems in least squares