New error bounds and their applications to convergence analysis of iterative algorithms (Q1584006)

From MaRDI portal
Property / reviewed by: Nada I. Djuranović-Miličić / rank
Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank
Normal rank

Revision as of 04:00, 5 March 2024

Language: English
Label: New error bounds and their applications to convergence analysis of iterative algorithms
Description: scientific article

    Statements

    New error bounds and their applications to convergence analysis of iterative algorithms (English)
    2 August 2001
    The author presents two new error bounds for optimization problems over a convex set whose objective function \(f\) is either semianalytic or \(\gamma\)-strictly convex, with \(\gamma\geq 1\). These error bounds are then applied to analyze the rate of convergence of a wide class of iterative descent algorithms for the optimization problem. The analysis shows that the function sequence \(\{f(x^k)\}\) converges at least at the sublinear rate of \(k^{-\varepsilon}\) for some positive constant \(\varepsilon\), where \(k\) is the iteration index. Moreover, the distances from the iterates \(\{x^k\}\) to the set of stationary points of the optimization problem converge to zero at least sublinearly.
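    To illustrate what a sublinear rate of the form \(k^{-\varepsilon}\) looks like in practice, the following sketch runs plain gradient descent on the hypothetical test function \(f(x)=x^4\) (convex but not strongly convex, so geometric convergence fails); the function, step size, and iteration count are illustrative assumptions, not taken from the paper under review.

```python
# Illustrative only: gradient descent on f(x) = x^4, a hypothetical example
# (not the paper's algorithm). On convex but non-strongly-convex objectives,
# f(x^k) typically decays sublinearly, roughly like C * k^{-epsilon},
# rather than geometrically.

def f(x):
    return x ** 4

def grad_f(x):
    return 4 * x ** 3

x = 1.0
eta = 0.01          # fixed step size, assumed small enough for stability
values = [f(x)]
for k in range(1000):
    x -= eta * grad_f(x)
    values.append(f(x))

# values decreases monotonically, but the ratio of successive terms tends
# to 1 -- the signature of sublinear (non-geometric) convergence.
```

    A geometric (linearly convergent) sequence would instead keep the ratio \(f(x^{k+1})/f(x^k)\) bounded away from 1.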
    nonlinear programming
    error bounds
    convergence
    iterative descent algorithms

    Identifiers