A convergent dynamic method for large minimization problems (Q1124283)

From MaRDI portal
 
Property / full work available at URL: https://doi.org/10.1016/0898-1221(89)90020-5 / rank
Normal rank
Property / OpenAlex ID: W2087847279 / rank
Normal rank
Property / cites work: A new and dynamic method for unconstrained minimization / rank
Normal rank
Property / cites work: An improved version of the original leap-frog dynamic method for unconstrained minimization: LFOP1(b) / rank
Normal rank
Property / cites work: A New Method for Solving Nonlinear Simultaneous Equations / rank
Normal rank
Property / cites work: Restart procedures for the conjugate gradient method / rank
Normal rank
Property / cites work: Q3857636 / rank
Normal rank


Language: English
Label: A convergent dynamic method for large minimization problems
Description: scientific article

    Statements

    A convergent dynamic method for large minimization problems (English)
    1989
    A new gradient algorithm for unconstrained minimization, min F(x), \(x\in R^n\), requiring no line searches and consequently no function evaluations, is developed. The algorithm approaches the original problem via the associated dynamic problem: solve the equations of motion \(x''(t)=-\nabla F(x(t))\) subject to the initial conditions \(x(0)=x_0\), \(x'(0)=v_0=0\), where \(x_0\) is the starting point. An approximate solution of these equations is obtained numerically by the so-called ``leap-frog'' method. For convex functions the convergence of the method is proved. The performance of the algorithm is compared with the conjugate gradient algorithm of \textit{M. J. D. Powell} [Math. Programming 12, 241-254 (1977; Zbl 0396.90072)].
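    The dynamic approach described above can be illustrated compactly. The following is a minimal Python sketch of a leap-frog-style integration of \(x''(t)=-\nabla F(x(t))\) started from rest, with a crude velocity-reset heuristic standing in for the paper's interference strategy; the step size `dt`, the reset rule, and the stopping test are illustrative assumptions, not the authors' LFOP rules.

```python
import numpy as np

def leapfrog_minimize(grad, x0, dt=0.1, max_iter=5000, tol=1e-6):
    """Illustrative sketch: minimize F by integrating x''(t) = -grad F(x(t)).

    Leap-frog-style scheme (assumed form, not the paper's exact algorithm):
        x_{k+1} = x_k + v_k * dt
        v_{k+1} = v_k + a_{k+1} * dt,   with a = -grad F
    Whenever the speed stops growing, the velocity is reset to zero -- a
    crude stand-in for the interference strategy of the dynamic method.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                 # start from rest: x'(0) = v0 = 0
    for _ in range(max_iter):
        x = x + v * dt                   # position update
        a = -grad(x)                     # acceleration = negative gradient
        if np.linalg.norm(a) < tol:      # near-stationary point: stop
            break
        v_new = v + a * dt               # velocity update
        # reset heuristic: if kinetic energy decreased, restart from rest
        v = v_new if v_new @ v_new > v @ v else np.zeros_like(v)
    return x

# Usage on a convex quadratic F(x) = 0.5 * ||x||^2, so grad F(x) = x.
x_star = leapfrog_minimize(lambda x: x, x0=[3.0, -2.0], dt=0.3)
print(x_star)   # approaches the minimizer at the origin
```

    Note that no function values F(x) are ever evaluated, only gradients, which matches the review's remark that the method requires no line searches and consequently no function evaluations.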
    comparison of methods
    gradient algorithm
    unconstrained minimization
    dynamic problem
    equations of motion
    ``leap-frog'' method
    convex functions
    convergence
    conjugate gradient algorithm

    Identifiers