New error bounds and their applications to convergence analysis of iterative algorithms (Q1584006)
From MaRDI portal
Revision as of 02:10, 1 February 2024
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | New error bounds and their applications to convergence analysis of iterative algorithms | scientific article |
Statements
New error bounds and their applications to convergence analysis of iterative algorithms (English)
2 August 2001
The author presents two new error bounds for optimization problems over a convex set whose objective function \(f\) is either semianalytic or \(\gamma\)-strictly convex, with \(\gamma\geq 1\). These error bounds are then applied to analyze the rate of convergence of a wide class of iterative descent algorithms for the optimization problem. The analysis shows that the sequence of function values \(\{f(x^k)\}\) converges at least at the sublinear rate \(k^{-\varepsilon}\) for some positive constant \(\varepsilon\), where \(k\) is the iteration index. Moreover, the distances from the iterate sequence \(\{x^k\}\) to the set of stationary points of the optimization problem converge to zero at least sublinearly.
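The sublinear rate \(f(x^k)=O(k^{-\varepsilon})\) described in the review can be observed numerically. The sketch below is not the paper's algorithm; it runs plain gradient descent on the assumed test function \(f(x)=x^4\), a smooth convex function with a degenerate minimizer at zero, and estimates the exponent \(\varepsilon\) from two iterates.

```python
import math

# Hypothetical test function (not from the paper): f(x) = x^4,
# whose unique minimizer x* = 0 is degenerate (f''(0) = 0), so
# gradient descent converges only sublinearly.
def f(x):
    return x ** 4

def grad(x):
    return 4 * x ** 3

x = 1.0      # starting point (assumed)
eta = 0.05   # fixed step size (assumed, small enough for stability)
vals = []
for k in range(1, 100001):
    x = x - eta * grad(x)
    vals.append(f(x))

# Estimate eps from the model f(x^k) ~ C * k^(-eps), using the
# function values after 1000 and 100000 iterations.
eps = math.log(vals[999] / vals[99999]) / math.log(100000 / 1000)
print(f"estimated rate exponent eps ~ {eps:.2f}")
```

For this particular \(f\) the estimated exponent comes out near 2, consistent with a rate of the form \(k^{-\varepsilon}\); the value of \(\varepsilon\) depends on the function and the algorithm, which is exactly what the error bounds in the paper are used to quantify.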
nonlinear programming
error bounds
convergence
iterative descent algorithms