A convergent dynamic method for large minimization problems (Q1124283)
From MaRDI portal
Full work available at URL: https://doi.org/10.1016/0898-1221(89)90020-5
OpenAlex ID: W2087847279
Language | Label | Description | Also known as
---|---|---|---
English | A convergent dynamic method for large minimization problems | scientific article |
Statements
A convergent dynamic method for large minimization problems (English)
1989
A new gradient algorithm for unconstrained minimization, \(\min F(x)\), \(x\in R^n\), requiring no line searches and consequently no function evaluations, is developed. The algorithm approaches the original problem through the associated dynamic problem: solve the equations of motion \(x''(t)=-\nabla F(x(t))\) subject to the initial conditions \(x(0)=x_0\) (the starting point) and \(x'(0)=v_0=0\). An approximate solution of these equations is obtained numerically by applying the so-called ``leap-frog'' method. For convex functions the convergence of the method is proved. Its performance is compared with the conjugate gradient algorithm of \textit{M. J. D. Powell} [Math. Programming 12, 241-254 (1977; Zbl 0396.90072)].
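The dynamic approach described above can be illustrated with a minimal sketch: integrate \(x''(t)=-\nabla F(x(t))\) from rest with the leap-frog (velocity-Verlet) scheme, and dissipate energy by zeroing the velocity whenever the motion starts to move uphill. The restart heuristic here is a simple stand-in, not the author's exact interfering strategy, and the quadratic test function, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def leapfrog_minimize(grad, x0, dt=0.01, steps=5000):
    """Minimize F by integrating x'' = -grad F(x), starting at rest.

    grad  : callable returning the gradient of F
    x0    : starting point (x(0) = x0, x'(0) = 0)
    dt    : time step of the leap-frog integrator
    steps : number of integration steps
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)          # v(0) = 0, as in the dynamic problem
    a = -grad(x)                  # acceleration = negative gradient
    for _ in range(steps):
        v_half = v + 0.5 * dt * a # half-step velocity update
        x = x + dt * v_half       # full-step position update
        a = -grad(x)              # acceleration at the new position
        v = v_half + 0.5 * dt * a # complete the velocity step
        # Damping heuristic (an assumption, not the paper's exact rule):
        # restart from rest whenever the trajectory turns uphill,
        # i.e. when velocity and force point in opposing directions.
        if np.dot(v, a) < 0.0:
            v = np.zeros_like(v)
    return x

# Convex quadratic F(x) = 0.5 * x^T A x with minimizer at the origin.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
x_min = leapfrog_minimize(lambda x: A @ x, x0=[4.0, -3.0])
```

Because no line search is performed, each iteration costs only one gradient evaluation and no function evaluations, which is the point of the method for large problems.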
Keywords: comparison of methods; gradient algorithm; unconstrained minimization; dynamic problem; equations of motion; ``leap-frog'' method; convex functions; convergence; conjugate gradient algorithm