New error bounds and their applications to convergence analysis of iterative algorithms (Q1584006)

From MaRDI portal
Property / DOI: 10.1007/s101070000173
Property / MaRDI profile type: MaRDI publication profile


scientific article
Language: English
New error bounds and their applications to convergence analysis of iterative algorithms

    Statements

    New error bounds and their applications to convergence analysis of iterative algorithms (English)
    2 August 2001
    The author presents two new error bounds for optimization problems over a convex set whose objective function \(f\) is either semianalytic or \(\gamma\)-strictly convex, with \(\gamma\geq 1\). These error bounds are then applied to analyze the rate of convergence of a wide class of iterative descent algorithms for the optimization problem. The analysis shows that the function value sequence \(\{f(x^k)\}\) converges at least at the sublinear rate \(k^{-\varepsilon}\) for some positive constant \(\varepsilon\), where \(k\) is the iteration index. Moreover, the distances from the iterates \(\{x^k\}\) to the set of stationary points of the optimization problem converge to zero at least sublinearly.
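    The sublinear rate \(k^{-\varepsilon}\) described above can be observed numerically. The following is a minimal sketch (not taken from the paper; the test function \(f(x)=x^4\), which is \(\gamma\)-strictly convex with \(\gamma=4\), and the step size are illustrative assumptions): a fixed-step gradient descent on such a flat minimum converges only sublinearly, with \(f(x^k)\) decaying roughly like \(k^{-2}\) rather than geometrically.

```python
# Illustrative sketch only: gradient descent on f(x) = x^4, whose minimizer
# at x = 0 is "flat" (zero curvature), so a fixed-step descent method
# exhibits sublinear convergence f(x^k) ~ k^(-epsilon) (here epsilon ~ 2).

def f(x):
    return x ** 4

def grad_f(x):
    return 4.0 * x ** 3

x = 1.0        # initial iterate (assumed starting point)
eta = 0.05     # fixed step size, small enough for stable descent here
values = [f(x)]
for k in range(1, 2001):
    x = x - eta * grad_f(x)
    values.append(f(x))

# Diagnostic for sublinearity: for a k^(-2) rate, f(x^{2k}) / f(x^k)
# approaches the constant 2^(-2) = 0.25, whereas a linear (geometric)
# rate would drive this ratio to zero as k grows.
print(values[2000] / values[1000])
```

The printed ratio settling near a fixed constant, instead of shrinking with \(k\), is the signature that distinguishes the sublinear rate from a linear one.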
    nonlinear programming
    error bounds
    convergence
    iterative descent algorithms

    Identifiers