A convergent dynamic method for large minimization problems (Q1124283)

From MaRDI portal

Reviewed by: Hans Benker

scientific article
Language: English
    Statements

    A convergent dynamic method for large minimization problems (English)
    1989
    A new gradient algorithm for unconstrained minimization, min F(x), \(x\in R^n\), requiring no line searches and consequently no function evaluations, is developed. This algorithm approaches the original problem via the associated dynamic problem: solve the equations of motion \(x''(t)=-\nabla F(x(t))\) subject to the initial conditions \(x(0)=x_0\), \(x'(0)=v_0=0\) (\(x_0\) the starting point). An approximate solution of these equations is obtained numerically by applying the so-called ``leap-frog'' method. For convex functions the convergence of the given method is proved. The performance of this algorithm is compared with the conjugate gradient algorithm of \textit{M. J. D. Powell} [Math. Programming 12, 241-254 (1977; Zbl 0396.90072)].
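    A minimal sketch of the underlying idea, not the author's exact scheme: integrate the equations of motion \(x''(t)=-\nabla F(x(t))\) with the leap-frog scheme, using only gradient evaluations. The velocity-quench heuristic below (restart when the velocity points uphill) is an illustrative stand-in for the paper's interference strategy; the function name, step size, and test problem are assumptions for the example.

```python
def leapfrog_minimize(grad, x0, dt=0.05, steps=2000):
    # Integrate x'' = -grad F(x) with the leap-frog scheme;
    # only gradient evaluations are needed, no function values.
    # The velocity quench is a simple stand-in for the original
    # method's interference strategy (not the paper's exact rule).
    x = list(map(float, x0))
    v = [0.0] * len(x)                 # v_0 = 0, as in the paper
    for _ in range(steps):
        g = grad(x)
        v = [vi - dt * gi for vi, gi in zip(v, g)]   # velocity step
        if sum(vi * gi for vi, gi in zip(v, g)) > 0:
            v = [0.0] * len(x)         # moving uphill: restart
        x = [xi + dt * vi for xi, vi in zip(x, v)]   # position step
    return x

# Example: convex quadratic F(x) = (x1^2 + 10*x2^2)/2,
# so grad F(x) = (x1, 10*x2); the minimizer is the origin.
xmin = leapfrog_minimize(lambda z: [z[0], 10.0 * z[1]], [3.0, -2.0])
```

    Because no function values are evaluated, the cost per iteration is a single gradient call, which is the point of the dynamic approach for large problems.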
    Keywords: comparison of methods; gradient algorithm; unconstrained minimization; dynamic problem; equations of motion; ``leap-frog'' method; convex functions; convergence; conjugate gradient algorithm
