On the limited memory BFGS method for large scale optimization (Q911463)
scientific article
Statements
On the limited memory BFGS method for large scale optimization (English)
1989
The authors investigate the numerical performance of several optimization algorithms for solving smooth, unconstrained and, in particular, large-scale problems. They compare their own code, based on limited-memory BFGS updates, with the method of \textit{A. Buckley} and \textit{A. LeNir} [ACM Trans. Math. Software 11, 103-119 (1985; Zbl 0562.65043)], which requires additional conjugate gradient steps, with two conjugate gradient methods, and with the partitioned quasi-Newton method of \textit{A. Griewank} and \textit{Ph. L. Toint} [in: Nonlinear optimization, NATO Conf. Ser., Ser. II, 301-312 (1982; Zbl 0563.90085); Math. Program. 28, 25-49 (1984; Zbl 0561.65045); and Numer. Math. 39, 429-448 (1982; Zbl 0505.65018)]. The results are based on 16 test problems whose dimension varies between 50 and 1000. The numerical tests show that the authors' method is faster than the method of Buckley and LeNir in both the number of function evaluations and execution time. The method also outperforms the conjugate gradient methods and is competitive with the partitioned quasi-Newton method on dense problems, but inferior on partitioned problems. Moreover, scaling effects are evaluated and a convergence analysis for convex problems is presented.
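The limited-memory BFGS idea discussed in the review can be illustrated with the standard two-loop recursion, which applies an implicit inverse-Hessian approximation built from only the last m correction pairs (s, y). The sketch below is a minimal illustration, not the paper's code: the pair-storage limit m, the Armijo backtracking line search (the actual method uses a Wolfe-condition search), and the curvature threshold are simplifying assumptions made here.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    # Two-loop recursion: compute H_k * g, where H_k is the implicit
    # inverse-Hessian approximation from the stored (s, y) pairs.
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    if s_list:
        # Scale the initial matrix by s^T y / y^T y -- one of the
        # scaling choices whose effect the paper's experiments examine.
        s, y = s_list[-1], y_list[-1]
        q *= np.dot(s, y) / np.dot(y, y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return q

def lbfgs(f, grad_f, x0, m=5, max_iter=500, tol=1e-8):
    # Minimize f using limited-memory BFGS with at most m stored pairs.
    x, g = x0.copy(), grad_f(x0)
    s_list, y_list = [], []
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -lbfgs_direction(g, s_list, y_list)
        # Simple Armijo backtracking line search (an assumption of this
        # sketch; a Wolfe-condition search is used in practice).
        t, fx, slope = 1.0, f(x), np.dot(g, d)
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = x + t * d
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        if np.dot(s, y) > 1e-10:       # keep only positive-curvature pairs
            s_list.append(s)
            y_list.append(y)
            if len(s_list) > m:        # discard the oldest pair
                s_list.pop(0)
                y_list.pop(0)
        x, g = x_new, g_new
    return x
```

On a convex quadratic f(x) = 0.5 x^T A x - b^T x this iteration drives the gradient A x - b to zero while storing only m vector pairs, which is the memory saving that makes the method attractive for the large-scale problems the review describes.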
large scale optimization
computational study
comparison of algorithms
smooth unconstrained problems
numerical performance
conjugate gradient methods
partitioned quasi-Newton method
scaling effects
convergence analysis