A generalization of Arcangeli's method for ill-posed problems leading to optimal rates (Q1803936)

From MaRDI portal
MaRDI profile type: MaRDI publication profile
Cites work:
- Q5552510
- Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates
- Asymptotic convergence rate of Arcangeli's method for ill-posed problems
- On the asymptotic order of accuracy of Tikhonov regularization
- Parameter choice by discrepancy principles for the approximate solution of ill-posed problems
- An optimal parameter choice for regularized ill-posed problems

Latest revision as of 16:57, 17 May 2024

Language: English
Label: A generalization of Arcangeli's method for ill-posed problems leading to optimal rates
Description: scientific article

    Statements

    A generalization of Arcangeli's method for ill-posed problems leading to optimal rates (English)
    29 June 1993
    The author discusses a parameter choice strategy for the regularization of ill-posed problems that leads to a near-optimal convergence rate. Let \(X\) and \(Y\) be Hilbert spaces and let \(T:X\to Y\) be a bounded linear operator. Let \(y^\delta\) (\(\delta>0\)) be inexact data with \(\| y-y^\delta\|\leq\delta\), and set \(x^\delta_\alpha=(T^*T+\alpha I)^{-1}T^*y^\delta\) for \(\alpha>0\). In the spirit of \textit{R. Arcangeli}'s method [C. R. Acad. Sci. Paris, Sér. A 263, No. 8, 282-285 (1966; Zbl 0166.411)], a generalized ``discrepancy principle'' is suggested for the choice of \(\alpha=\alpha(\delta)\). It is shown that this strategy yields a near-optimal convergence rate of \(\{x_\alpha^\delta\}\) as \(\delta\to 0\). The results improve on those of \textit{E. Schock} [J. Optimization Theor. Appl. 44, 95-104 (1984; Zbl 0531.65031)].
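    The review does not state the generalized discrepancy principle explicitly, so the following is only a minimal numerical sketch of the classical Arcangeli rule it generalizes: choose \(\alpha\) so that \(\|Tx^\delta_\alpha-y^\delta\|=\delta/\sqrt{\alpha}\). The test matrix (a Hilbert matrix), the noise model, and all function names are illustrative assumptions, not the author's construction.

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Regularized solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def arcangeli_alpha(A, y_delta, delta, lo=1e-12, hi=1e2, iters=200):
    """Classical Arcangeli rule: solve ||A x_alpha - y_delta|| = delta / sqrt(alpha).

    The residual norm is nondecreasing in alpha while delta/sqrt(alpha)
    decreases, so g(alpha) below is monotone and has a unique root,
    which we locate by bisection on a logarithmic scale."""
    def g(alpha):
        r = np.linalg.norm(A @ tikhonov(A, y_delta, alpha) - y_delta)
        return r - delta / np.sqrt(alpha)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)  # geometric midpoint
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Hypothetical ill-conditioned test problem: Hilbert matrix, exact x = ones.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
y = A @ x_true

# Noisy data with ||y - y_delta|| = delta, as in the review's setting.
rng = np.random.default_rng(0)
delta = 1e-4
noise = rng.standard_normal(n)
y_delta = y + delta * noise / np.linalg.norm(noise)

alpha = arcangeli_alpha(A, y_delta, delta)
x_alpha = tikhonov(A, y_delta, alpha)
print("alpha =", alpha, " error =", np.linalg.norm(x_alpha - x_true))
```

    The generalization studied in the paper replaces the right-hand side \(\delta/\sqrt{\alpha}\) with a more general power-type function of \(\delta\) and \(\alpha\); only that choice of exponents changes in the root-finding step above.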
    discrepancy principle
    parameter choice
    regularization
    ill-posed problems
    near-optimal convergence rate
    Hilbert spaces
