Acceleration of conjugate gradient algorithms for unconstrained optimization (Q1029371)
From MaRDI portal
Property / describes a project that uses: CUTEr
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Acceleration of conjugate gradient algorithms for unconstrained optimization | scientific article | |
Statements
Acceleration of conjugate gradient algorithms for unconstrained optimization (English)
Publication date: 10 July 2009
A new approach for the acceleration of conjugate gradient methods is presented. The proposed method modifies the steplength \(\alpha _k\), computed via the Wolfe line search conditions, by means of a positive parameter \(\eta _k\), in order to improve the behavior of the classical conjugate gradient algorithms, which are mainly applied to large-scale unconstrained optimization. It is shown that for uniformly convex functions the convergence of the accelerated algorithm is still linear, but the reduction in function values is significantly improved. Numerical comparisons with conjugate gradient algorithms on a set of 750 unconstrained optimization problems show that the accelerated computational scheme outperforms the corresponding conjugate gradient algorithms.
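Since the record carries only the abstract, the update rule for \(\eta _k\) is not specified here. The Python sketch below shows one plausible reading, under stated assumptions: the Wolfe steplength is rescaled by a positive factor \(\eta _k\) obtained by minimizing a one-dimensional quadratic model of \(f\) along the search direction, with curvature estimated from a gradient difference at the trial point. The function accelerated_cg, the \(\eta _k\) rule, and the Polak-Ribière(+) direction update are illustrative choices, not the paper's exact scheme; the Wolfe search is SciPy's scipy.optimize.line_search.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def accelerated_cg(f, grad, x0, tol=1e-6, max_iter=2000):
    """Conjugate gradient method with a multiplicative steplength acceleration.

    After the Wolfe line search returns alpha, the step is rescaled by a
    positive factor eta estimated from a one-dimensional quadratic model
    of f along d (an illustrative rule, not necessarily the paper's).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]
        if alpha is None:            # Wolfe search failed: restart along -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0]
            if alpha is None:
                break
        gz = grad(x + alpha * d)     # gradient at the Wolfe trial point
        # eta minimizes the quadratic model
        #   m(eta) = f + eta*alpha*g'd + 0.5*eta^2*alpha*(gz - g)'d,
        # where the curvature term comes from a gradient difference.
        num = -alpha * (g @ d)                # > 0 for a descent direction
        den = alpha * ((gz - g) @ d)          # approx alpha^2 * d' H d
        eta = num / den if den > 0 else 1.0   # fall back to the plain step
        x_new = x + eta * alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quick check on the Rosenbrock function.
x_star = accelerated_cg(rosen, rosen_der, [-1.2, 1.0])
print(x_star)  # should approach [1.0, 1.0]
```

Falling back to \(\eta _k = 1\) when the estimated curvature is not positive keeps the sketch no worse than the plain Wolfe step outside the uniformly convex case the abstract addresses.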
Keywords: Wolfe line search; line search gradient methods; unconstrained optimization; convergence acceleration; numerical examples; conjugate gradient methods; large-scale; convergence; numerical comparisons