Convergence of line search methods for unconstrained optimization
DOI: 10.1016/j.amc.2003.08.058
zbMath: 1072.65087
OpenAlex: W2081842231
MaRDI QID: Q1881700
Publication date: 14 October 2004
Published in: Applied Mathematics and Computation
Full work available at URL: https://doi.org/10.1016/j.amc.2003.08.058
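For orientation, line search methods for unconstrained minimization generate iterates x_{k+1} = x_k + α_k d_k, where d_k is a descent direction (for example, the negative gradient) and the step size α_k is chosen to guarantee sufficient decrease of the objective. The following minimal Python sketch of a descent method with backtracking (Armijo) line search is purely illustrative; the function names, parameter values, and the quadratic test problem are assumptions for this example and are not taken from the paper.

```python
import numpy as np

def backtracking_armijo(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo (sufficient decrease) condition
    f(x + alpha*d) <= f(x) + c*alpha*grad(x)^T d holds."""
    fx, gTd = f(x), grad(x) @ d
    assert gTd < 0, "d must be a descent direction"
    alpha = alpha0
    while f(x + alpha * d) > fx + c * alpha * gTd:
        alpha *= rho  # reduce the trial step geometrically
    return alpha

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Gradient descent with Armijo backtracking; stops when ||grad f|| <= tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g  # steepest-descent direction
        x = x + backtracking_armijo(f, grad, x, d) * d
    return x

# Illustrative test problem: minimize f(x) = 1/2 x^T A x - b^T x, A positive definite
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(steepest_descent(f, grad, np.zeros(2)))  # should match np.linalg.solve(A, b)
```

The shrink factor rho and Armijo parameter c above are conventional defaults; convergence analyses of the kind this entry and its related items address establish conditions under which such iterations drive the gradient norm to zero.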
Related Items
- The Hager–Zhang conjugate gradient algorithm for large-scale nonlinear equations
- Convergence of quasi-Newton method with new inexact line search
- New inexact line search method for unconstrained optimization
- A new descent algorithm using the three-step discretization method for solving unconstrained optimization problems
- The convergence of subspace trust region methods
- Nonmonotone adaptive trust region method
- An efficient descent direction method with cutting planes
- Heterogeneous Mediation Analysis on Epigenomic PTSD and Traumatic Stress in a Predominantly African American Cohort
- An accelerated double step size model in unconstrained optimization
- A note on hybridization process applied on transformed double step size model
- Two modifications of the method of the multiplicative parameters in descent gradient methods
- Accelerated double direction method for solving unconstrained optimization problems
- A survey of gradient methods for solving nonlinear optimization
- Multivariate spectral gradient method for unconstrained optimization
- A modified conjugate gradient algorithm with backtracking line search technique for large-scale nonlinear equations
- Convergence of PRP method with new nonmonotone line search
- Hybridization of accelerated gradient descent method
- A Modified PRP Conjugate Gradient Algorithm with Trust Region for Optimization Problems
- A new trust region method for unconstrained optimization
- Modified nonmonotone Armijo line search for descent method
- A reduced-space line-search method for unconstrained optimization via random descent directions
- A conjugate gradient algorithm for large-scale nonlinear equations and image restoration problems
- On step-size estimation of line search methods
- Convergence of nonmonotone line search method
- A new class of memory gradient methods with inexact line searches
- An acceleration of gradient descent algorithm with backtracking for unconstrained optimization
- A New Method with Descent Property for Symmetric Nonlinear Equations
- New line search methods for unconstrained optimization
- Accelerated gradient descent methods with line search
- A conjugate gradient method with descent direction for unconstrained optimization
- Computer Algebra and Line Search
- A modified PRP conjugate gradient method
- A new trust region method with adaptive radius
- A descent algorithm without line search for unconstrained optimization
- Convergence of descent method without line search
- Accelerated multiple step-size methods for solving unconstrained optimization problems
- Optimization on the hierarchical Tucker manifold - applications to tensor completion
- Convergence of descent method with new line search
Cites Work
- Stepsize analysis for descent methods
- Augmentability in optimization theory
- Enlarging the region of convergence of Newton's method for constrained optimization
- Convergence rates of a global optimization algorithm
- On linear convergence of gradient-type minimization algorithms
- A generalized conjugate gradient algorithm
- A class of nonmonotone conjugate gradient methods for unconstrained optimization
- Some convergence properties of descent methods
- Combining search directions using gradient flows
- Convergence of implementable descent algorithms for unconstrained optimization
- Efficient line search algorithm for unconstrained optimization
- A new unconstrained optimization method for imprecise function and gradient values
- A dimension-reducing method for unconstrained optimization
- A class of gradient unconstrained minimization algorithms with adaptive stepsize
- Minimization of functions having Lipschitz continuous first partial derivatives
- On Convergence Properties of Algorithms for Unconstrained Minimization
- Numerical Optimization
- Gradient Convergence in Gradient Methods with Errors
- Convergence Properties of Nonlinear Conjugate Gradient Methods
- Global and superlinear convergence of an algorithm for one-dimensional minimization of convex functions