A new adaptive trust region algorithm for optimization problems
From MaRDI portal
Publication:1637037
DOI: 10.1016/S0252-9602(18)30762-8
zbMath: 1399.65135
OpenAlex: W2794414816
MaRDI QID: Q1637037
Zhou Sheng, Zengru Cui, Gong Lin Yuan
Publication date: 7 June 2018
Published in: Acta Mathematica Scientia. Series B. (English Edition)
Full work available at URL: https://doi.org/10.1016/s0252-9602(18)30762-8
Numerical mathematical programming methods (65K05)
Nonconvex programming, global optimization (90C26)
Related Items (7)
- A conjugate gradient algorithm for large-scale unconstrained optimization problems and nonlinear equations
- Adaptive three-term PRP algorithms without gradient Lipschitz continuity condition for nonconvex functions
- Family weak conjugate gradient algorithms and their convergence analysis for nonconvex functions
- Adaptive trust-region method on Riemannian manifold
- A tensor trust-region model for nonlinear system
- A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration
- A modified HZ conjugate gradient algorithm without gradient Lipschitz continuous condition for nonconvex functions
Uses Software
Cites Work
- A modified Hestenes and Stiefel conjugate gradient algorithm for large-scale nonsmooth minimizations and nonlinear equations
- An adaptive conjugate gradient algorithm for large-scale unconstrained optimization
- A nonmonotone trust region method with adaptive radius for unconstrained optimization problems
- Nonmonotone adaptive trust region method
- An improved trust region algorithm for nonlinear equations
- A BFGS trust-region method for nonlinear equations
- Convergence analysis of a modified BFGS method on convex minimizations
- A conjugate gradient method with descent direction for unconstrained optimization
- The superlinear convergence analysis of a nonmonotone BFGS algorithm on convex objective functions
- Modified nonlinear conjugate gradient methods with sufficient descent property for large-scale optimization problems
- A limited memory BFGS-type method for large-scale unconstrained optimization
- A modified PRP conjugate gradient method
- A new trust region method for nonlinear equations
- Neural networks in optimization
- A hybrid of adjustable trust-region and nonmonotone algorithms for unconstrained optimization
- A new regularized quasi-Newton algorithm for unconstrained optimization
- A new modified nonmonotone adaptive trust region method for unconstrained optimization
- A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs
- Global convergence of BFGS and PRP methods under a modified weak Wolfe-Powell line search
- A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations
- Recent advances in trust region algorithms
- A quasi-Newton trust region method with a new conic model for the unconstrained optimization
- The global convergence of a modified BFGS method for nonconvex functions
- A new non-monotone self-adaptive trust region method for unconstrained optimization
- A new trust region method for unconstrained optimization
- New quasi-Newton methods for unconstrained optimization problems
- An adaptive trust region method and its convergence
- A Modified Hestenes-Stiefel Conjugate Gradient Algorithm for Large-Scale Optimization
- A Trust Region Algorithm with Conjugate Gradient Technique for Optimization Problems
- A New Modified Cholesky Factorization
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- Testing Unconstrained Optimization Software
- A New Algorithm for Unconstrained Optimization
- An improved trust region method for unconstrained optimization
- Benchmarking optimization software with performance profiles.