Wavelet-based multilevel methods for linear ill-posed problems (Q639969)

From MaRDI portal
scientific article

    Statements

    Wavelet-based multilevel methods for linear ill-posed problems (English)
    11 October 2011
    The goal of this paper is the numerically efficient and stable approximate solution of linear Fredholm integral equations of the first kind \[ \int \limits_a^b \kappa(t,s) x(s) ds = g(t), \qquad a \leq t \leq b, \] by means of wavelet-based multilevel methods. As is well known, such integral equations, which can be written as linear operator equations \(Ax=g\), are ill-posed, in particular when the operator \(A\), mapping in \(L^2(a,b)\), is compact. Regularization methods are then required if only noisy data \(g^\delta\) of \(g\) with \(\|g^\delta-g\| \leq \delta\) and noise level \(\delta>0\) are available. In the paper under consideration an adapted version of \textit{regularization by discretization} is suggested, in which \textit{cascadic multilevel methods} are applied to the unregularized problem. More precisely, an approximate solution is determined on each level of discretization by a few iterations of a conjugate gradient or related iteration method. The stabilizing effect comes from restricting the number of iterations carried out on each level with the aid of the discrepancy principle. After the iterations on a fixed level are finished, the computations proceed to the next finer level, and the process stops once the discrepancy principle is satisfied on the finest level. It is shown that the cascadic multiresolution techniques based on conjugate residual or MR-II methods, and in the non-symmetric case on conjugate gradient methods applied to the associated normal equations (CGNR), represent regularization methods in a well-defined sense. From a practical point of view the number of iterations performed is important, so one main purpose of the paper is to determine a suitable number of iterations on each level, based on a combination of the discrepancy principle with an estimate of the error in the right-hand side on a fixed level of discretization.
The operational reliability of this multilevel framework is tested in several numerical experiments, one of which focuses on a tomography problem. For that application the recommended ML-CGNR algorithm performs well provided the Besov smoothness of the expected solution is sufficiently high.
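The cascadic iteration described in the review can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names (`kernel_matrix`, `cgnr`, `ml_cgnr`), the Gaussian test kernel, the interpolation-based prolongation between grids, and the parameter choices are all assumptions made for the example. On each level, CGNR runs until the discrepancy principle \(\|Ax-g^\delta\| \leq \tau\delta\) triggers (or an iteration cap is hit), and the iterate is then prolonged to the next finer grid.

```python
import numpy as np

def kernel_matrix(n, a=0.0, b=1.0):
    """Midpoint-rule discretization of a smooth (hence compact, ill-posed)
    sample kernel kappa(t, s) = exp(-(t - s)^2) on [a, b] (an assumption
    for illustration, not the kernel used in the paper)."""
    h = (b - a) / n
    s = a + h * (np.arange(n) + 0.5)
    return h * np.exp(-(s[:, None] - s[None, :]) ** 2)

def cgnr(A, g, x, tol, h, max_iter=100):
    """CG applied to the normal equations A^T A x = A^T g (CGNR),
    stopped early by the discrepancy principle ||A x - g|| <= tol."""
    l2 = lambda v: np.sqrt(h) * np.linalg.norm(v)  # approximate L^2 norm
    r = g - A @ x
    z = A.T @ r
    p = z.copy()
    for _ in range(max_iter):
        if l2(r) <= tol:
            break                      # discrepancy principle satisfied
        q = A @ p
        if q @ q == 0.0:
            break
        alpha = (z @ z) / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        z_new = A.T @ r
        beta = (z_new @ z_new) / (z @ z)
        p = z_new + beta * p
        z = z_new
    return x

def ml_cgnr(levels, g_fine, delta, tau=1.1):
    """Cascadic multilevel CGNR over grid sizes `levels` (coarse to fine):
    iterate on each level until the discrepancy principle triggers, then
    prolong the iterate to the next finer grid by linear interpolation
    (a crude stand-in for the wavelet-based transfer of the paper)."""
    x = np.zeros(levels[0])
    for n in levels:
        h = 1.0 / n
        t = h * (np.arange(n) + 0.5)
        t_old = (np.arange(len(x)) + 0.5) / len(x)
        t_fine = (np.arange(len(g_fine)) + 0.5) / len(g_fine)
        x = np.interp(t, t_old, x)        # prolong current iterate
        g = np.interp(t, t_fine, g_fine)  # restrict the noisy data
        x = cgnr(kernel_matrix(n), g, x, tau * delta, h)
    return x
```

The early stopping plays the role of the regularization parameter: the coarse levels are cheap and already capture the smooth components of the solution, so only a few iterations are needed on the expensive fine levels.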
    linear operator equation
    ill-posed problem
    wavelet
    regularization by discretization
    cascadic multilevel method
    minimal residual method
    discrepancy principle
    Fredholm integral equations of the first kind
    conjugate gradient
    error estimation
    numerical experiments
    tomography
    algorithm
