Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition (Q1893655)
From MaRDI portal
Language | Label | Description | Also known as |
---|---|---|---|
English | Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition | scientific article | |
Statements
Nonlinear solution of linear inverse problems by wavelet-vaguelette decomposition (English)
20 July 1995
A stochastic version of linear inverse problems is treated. It is assumed that the data on which the inversion is to be based, \(y(u)\), are given by \(y(u) = (Kf)(u) + z(u)\), \(u \in U\), where \(z\) is noise, and \(f\) must be recovered from the data \(y\); the \(L^2\) norm is used to measure the quality of recovery. In the cases of most interest, \(K\) is not invertible and the problem is ill-posed. The standard approach is the singular value decomposition (SVD) of the inverse problem, combined with regularization of Tikhonov type.

The author describes the wavelet-vaguelette decomposition (WVD) of a linear inverse problem and uses it in place of the SVD. He proposes to solve the problem by nonlinearly ``shrinking'' the WVD coefficients of the noisy, indirect data. The WVD is shown to exist for a class of special inverse problems of homogeneous type (numerical differentiation, inversion of Abel-type transforms, certain convolution transforms, and the Radon transform). Orthogonal wavelet bases are used which serve as unconditional bases for all of the spaces in the Besov and Triebel-Lizorkin scales (wavelet bases are, incidentally, also useful for data compression). This approach offers significant advantages over traditional SVD inversion in recovering spatially inhomogeneous objects.

The author supposes that the observations are contaminated by white noise and that the object \(f\) is an unknown element of a Besov space. He proves that nonlinear WVD shrinkage can be tuned to attain the minimax rate of convergence, for \(L^2\) loss, over the entire scale of Besov spaces, including the Besov spaces \(B^\sigma_{p,q}\) with \(p < 2\), which model spatial inhomogeneity. In comparison, linear procedures (the SVD included) cannot attain optimal rates of convergence over such classes when \(p < 2\). This is the main result of the paper. In particular, the proposed methods achieve faster rates of convergence than any linear procedure for objects known to lie in the bump algebra or in bounded variation. A brief survey of the subject is also given; the list of references contains 60 items.
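To convey the flavour of the approach, the following is a minimal Python sketch (using NumPy and PyWavelets), not the paper's own construction: the operator \(K\) is taken to be integration, so that inverting it amounts to numerical differentiation, one of the homogeneous problems listed above; the naively differentiated data are then denoised by level-dependent soft thresholding of their wavelet coefficients, which imitates shrinking WVD coefficients. The signal, noise level, wavelet, and thresholds are illustrative choices, not taken from the paper.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic data y = Kf + noise, with K taken to be integration, so that
# inverting K amounts to numerical differentiation (a homogeneous problem).
# All sizes and levels below are illustrative, not from the paper.
n = 1024
t = np.linspace(0.0, 1.0, n)
f = np.sign(np.sin(8 * np.pi * t))           # spatially inhomogeneous object
Kf = np.cumsum(f) / n                        # crude discrete integration of f
sigma = 0.0005                               # illustrative noise level
y = Kf + sigma * rng.standard_normal(n)

# Naive inversion (differentiation) amplifies the noise, most strongly at
# fine scales; soft thresholding the wavelet coefficients of the naive
# inverse, with the noise scale estimated separately at each resolution
# level, mimics nonlinear shrinkage of WVD coefficients.
raw = np.gradient(y, t)                      # unstable "raw" inverse of K

coeffs = pywt.wavedec(raw, "db4", level=6)
shrunk = [coeffs[0]]                         # keep the coarse approximation
for d in coeffs[1:]:                         # detail levels, coarse to fine
    sigma_j = np.median(np.abs(d)) / 0.6745  # robust per-level noise estimate
    thr = sigma_j * np.sqrt(2.0 * np.log(d.size))   # universal threshold
    shrunk.append(pywt.threshold(d, thr, mode="soft"))

f_hat = pywt.waverec(shrunk, "db4")[:n]
print("relative L2 error:", np.linalg.norm(f_hat - f) / np.linalg.norm(f))
```

Because the noise amplification of a homogeneous operator grows geometrically with the resolution level, estimating the noise scale level by level (here via the median absolute deviation) plays a role analogous to the level-dependent normalization built into the WVD.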
ill-posed problem
numerical differentiation
orthogonal wavelet bases
Besov scales
stochastic version
linear inverse problems
singular value decomposition
regularization
wavelet-vaguelette decomposition
nonlinear ``shrinking''
inversion of Abel-type transforms
convolution transforms
Radon transform
Triebel-Lizorkin scales
data compression
white noise
Besov space
convergence