A conjugate gradient method with global convergence for large-scale unconstrained optimization problems (Q1790099)

From MaRDI portal
Property / author: Xi-wen Lu / rank: Normal rank
Property / author: Zeng-xin Wei / rank: Normal rank

Revision as of 12:05, 14 February 2024

Language: English
Label: A conjugate gradient method with global convergence for large-scale unconstrained optimization problems
Description: scientific article

    Statements

    A conjugate gradient method with global convergence for large-scale unconstrained optimization problems (English)
    10 October 2018
    Summary: The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. This paper proposes a conjugate gradient method which is similar to the Dai-Liao conjugate gradient method [\textit{Y. H. Dai} and \textit{L. Z. Liao}, Appl. Math. Optim. 43, No. 1, 87--101 (2001; Zbl 0973.65050)] but has stronger convergence properties. The proposed method satisfies the sufficient descent condition and is globally convergent under the strong Wolfe-Powell (SWP) line search for general functions. Numerical results show that the proposed method is efficient on the test problems.
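    The abstract does not state the paper's modified update formula, but the classical Dai-Liao scheme it builds on computes the search direction as \(d_{k+1} = -g_{k+1} + \beta_k d_k\) with \(\beta_k = g_{k+1}^T(y_k - t s_k)/(d_k^T y_k)\), where \(s_k = x_{k+1} - x_k\) and \(y_k = g_{k+1} - g_k\). The sketch below is a minimal illustration of that classical scheme combined with a crude strong Wolfe-Powell line search; the parameter \(t = 0.1\), the line-search tolerances, the descent-restart safeguard, and the test function are illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def strong_wolfe(f, grad, x, d, c1=1e-4, c2=0.1, max_iter=50):
        """Crude bisection search for a step length satisfying the strong
        Wolfe-Powell conditions (tolerances c1, c2 are illustrative)."""
        f0, g0 = f(x), grad(x) @ d           # g0 < 0 for a descent direction
        lo, hi, alpha = 0.0, None, 1.0
        for _ in range(max_iter):
            fa = f(x + alpha * d)
            ga = grad(x + alpha * d) @ d
            if fa > f0 + c1 * alpha * g0:
                hi = alpha                   # sufficient decrease fails: step too long
            elif abs(ga) > -c2 * g0:
                if ga < 0:
                    lo = alpha               # slope still negative: step too short
                else:
                    hi = alpha               # slope positive: overshot the minimizer
            else:
                return alpha                 # both strong Wolfe conditions hold
            alpha = 0.5 * (lo + hi) if hi is not None else 2.0 * alpha
        return alpha

    def dai_liao_cg(f, grad, x0, t=0.1, tol=1e-6, max_iter=500):
        """CG iteration with the classical Dai-Liao coefficient
        beta_k = g_{k+1}^T (y_k - t * s_k) / (d_k^T y_k)."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            alpha = strong_wolfe(f, grad, x, d)
            s = alpha * d                    # s_k = x_{k+1} - x_k
            x_new = x + s
            g_new = grad(x_new)
            y = g_new - g                    # y_k = g_{k+1} - g_k
            denom = d @ y
            beta = (g_new @ (y - t * s)) / denom if abs(denom) > 1e-12 else 0.0
            d = -g_new + beta * d
            if g_new @ d >= 0:               # safeguard (an assumption): restart
                d = -g_new                   # if the direction is not descent
            x, g = x_new, g_new
        return x
    ```

    On a strictly convex quadratic such as \(f(x) = \tfrac{1}{2}(x_1^2 + 10 x_2^2)\), this iteration drives the gradient norm below the tolerance in a few steps; the paper's own method modifies the coefficient so that sufficient descent and global convergence hold for general functions as well.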

    Identifiers