Regularization algorithms for ill-posed problems (Q1690183)

scientific article (English)

Publication date: 19 January 2018
This monograph provides a well-prepared and comprehensive treatment of regularization methods for linear and nonlinear ill-posed problems in Hilbert and Banach spaces. In what follows, we summarize the key features of this book. The text consists of six chapters. The first chapter provides a brief background from functional analysis. Basic concepts of ill-posedness and general ideas for the construction of regularization algorithms are also considered.

Chapter 2 is devoted to the regularization of linear ill-posed operator equations \( A x = f \) in Hilbert and Banach spaces. For bounded linear operators \( A: X_1 \to X_2 \), with \( X_1, X_2 \) being Hilbert spaces, the considered schemes, in the case of exactly given operator and right-hand side, are of the general form \( x_{\alpha} = (E- \theta(A^*A,\alpha)A^*A ) \xi + \theta(A^*A,\alpha)A^* f \), where \( \theta \) is some generating function, and \( E \) denotes the identity operator in \( X_1 \). In addition, \( \xi \in X_1 \) is some initial guess to a solution of \( Ax=f \), and \( \alpha > 0 \) is a small regularization parameter. Prominent examples are given by the iterated Tikhonov regularization, the Landweber iteration, and the abstract Cauchy problem method. For symmetric, positive semidefinite operators \( A: X \to X \) in a Hilbert space \( X \), a similar class is considered: \( x_{\alpha} = (E- \theta(A,\alpha)A ) \xi + \theta(A,\alpha) f \). The same class of schemes is considered for bounded linear operators \( A: X \to X \) in a Banach space \( X \) using the Riesz-Dunford calculus. The generating function \( \theta(\lambda,\alpha) \) is assumed to be holomorphic in \( \lambda \) on a domain containing the spectrum \( \sigma(A) \), and the operator \( A \) satisfies a sectorial resolvent condition. For all three cases, conditions are given that guarantee convergence \( x_{\alpha} \to x^* \) or rates of convergence as \( \alpha \to 0 \), where \( x^* \) denotes some solution of the equation \( Ax=f \). This also includes logarithmic source conditions, so that the results can be applied to ill-posed Cauchy problems. A priori parameter choice strategies for noisy data \( A_h \approx A \) and \( f_\delta \approx f \) are provided, and converse results are given in comprehensive form. One section in this chapter deals with random noise.

In Chapter 3, multistep methods for solving ill-posed Cauchy problems of the form \( x^\prime(t) = Ax(t), \; 0 \leq t \leq T, \; x(0) = f \), are considered, where \( A: D(A) \subset X \to X \) is an unbounded linear operator in a Banach space \( X \), with \( \overline{D(A)} = X \), and \( f \in D(A) \) is given. The operator \( A \) is assumed to satisfy a sectorial resolvent condition of the form \( \sigma(A) \subset K_0(\varphi_0), \;\| (\lambda I - A)^{-1} \| \leq \frac{r_1}{1+ | \lambda |} \) for all \( \lambda \in \mathbb{C} \setminus K_0(\varphi_0) \), where \( K_0(\varphi_0) = \{ \lambda \in \mathbb{C}\setminus\{0\} : | \arg \lambda | < \varphi_0 \} \), and \( r_1> 0, \, 0 < \varphi_0< \pi/2 \). The considered multistep methods are of the general form \( \sum_{j=0}^{k} \alpha_j x_{n+j} = \Delta t \sum_{j=0}^{k} \beta_j Ax_{n+j} \) for \( 0 \leq n \leq N-k \), where \( \alpha_j, \beta_j \) are given coefficients, and \( \Delta t = T/N \).
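For illustration, the generating-function scheme from Chapter 2 can be sketched in finite dimensions, where \( \theta(A^*A,\alpha) \) is applied via the singular value decomposition. The following minimal NumPy sketch uses the Tikhonov generating function \( \theta(\lambda,\alpha) = (\lambda+\alpha)^{-1} \); the toy problem and all names are illustrative assumptions, not taken from the book.

```python
import numpy as np

def spectral_regularization(A, f, xi, alpha, theta):
    """Evaluate x_alpha = (E - theta(A*A, alpha) A*A) xi + theta(A*A, alpha) A* f
    for a matrix A, applying theta to the spectrum of A*A via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam = s**2                      # eigenvalues of A*A
    th = theta(lam, alpha)          # generating function acts on the spectrum
    xi_coef = Vt @ xi               # xi in the right singular basis
    Af_coef = s * (U.T @ f)         # A* f in the same basis
    x_coef = (1.0 - th * lam) * xi_coef + th * Af_coef
    return Vt.T @ x_coef

# Tikhonov generating function theta(lambda, alpha) = 1 / (lambda + alpha).
tikhonov = lambda lam, alpha: 1.0 / (lam + alpha)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(np.geomspace(1.0, 1e-6, 10))  # ill-conditioned toy operator
x_true = rng.standard_normal(10)
f = A @ x_true + 1e-6 * rng.standard_normal(20)                           # noisy right-hand side
x_alpha = spectral_regularization(A, f, xi=np.zeros(10), alpha=1e-4, theta=tikhonov)
```

With this choice of \( \theta \), the scheme reduces to \( x_\alpha = (A^*A+\alpha E)^{-1}(A^*f + \alpha \xi) \), i.e., Tikhonov regularization with initial guess \( \xi \); other generating functions (iterated Tikhonov, Landweber) fit the same template.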
Estimates for the error \( \| x_n - x(n \Delta t) \| \) are given, provided that one of the following two smoothness assumptions is satisfied: (a) the classical solution exists on a certain larger segment \( [ 0, T_1 ] \) with \( T_1 > T \), or (b) \( x(T) \in R(A^{-q}) \) for some \( q \geq 1 \). The regularizing properties for noisy data \( f_\delta \in X \) satisfying \( \| f_\delta - f \| \leq \delta \) are considered, and the necessity of source conditions is also treated.

The subject of Chapters 4 and 5 is the iterative regularization of equations of the form \( F(x) = 0 \) (excluding the last two sections of Chapter 4 and the last section of Chapter 5, which deal with \( F(x) = f \)). Here \( F: X_1 \to X_2 \) is a nonlinear Fréchet differentiable operator acting between Hilbert spaces \( X_1 \) and \( X_2 \). In what follows, \( x^* \) denotes a desired solution of the considered equation, and \( x_0, \xi \in X_1 \) are initial guesses to \( x^* \). The following class of iteratively regularized Gauß-Newton type methods is studied first in Chapter 4, \[ x_{n+1} = \xi - \theta(F^{\prime}(x_n)^\ast F^{\prime}(x_n), \alpha_n) F^{\prime}(x_n)^\ast [ F(x_n) - F^{\prime}(x_n) (x_n - \xi)], \] where \( \{ \alpha_n \} \) is a sequence of positive regularization parameters with \( \alpha_n \to 0 \) as \( n \to \infty \), and \( \theta = \theta(\lambda, \alpha) \) is a generating function. A prominent example is given by \( \theta(\lambda, \alpha) = (\lambda+ \alpha)^{-1} \), yielding the classical iteratively regularized Gauß-Newton method. It is shown that \( \| x_n - x^\ast \| = \mathcal{O}(\alpha_n^p) \) holds as \( n \to \infty \) if a source condition \( x^* - \xi \in R((F^{\prime}(x^\ast)^\ast F^{\prime}(x^\ast))^p) \) holds for some \( p \geq \tfrac{1}{2} \) and the generating function \( \theta \) satisfies several conditions. The results are extended to noisy right-hand sides and operators, as well as to approximate source representations. In addition, a version of the method is considered which involves metric projections.

Also in Chapter 4, a class of iteratively regularized Newton-Kantorovich type methods \[ x_{n+1} = \xi - \theta(F^{\prime}(x_n), \alpha_n) ( F(x_n) - F^{\prime}(x_n) (x_n - \xi)) \] is considered, where \( X_1=X_2 \) is assumed. Results are presented which are similar to those for the iteratively regularized Gauß-Newton method. The necessity of source conditions for the iteratively regularized Gauß-Newton and Newton-Kantorovich methods is also considered. Chapter 4 moreover treats a class of iteratively regularized gradient methods \[ x_{n+1} = x_n - \mu_n( \widetilde{F}^{\prime}(x_n)^\ast \widetilde{F}(x_n) + \alpha_n (x_n - \xi)), \] where \( \widetilde{F}: X_1 \to X_2 \) denotes an approximation to the exact operator \( F \). In the subsequent section, the authors investigate both the iteratively regularized Gauß-Newton method and the regularized gradient method under large noise, i.e., the additive noise in the operators is small with respect to the weak topology but not necessarily small in norm. Appropriate a priori stopping rules are provided. Another section treats the class of iteratively regularized Gauß-Newton type methods under random errors in the right-hand side.
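To make the classical case \( \theta(\lambda,\alpha) = (\lambda+\alpha)^{-1} \) concrete, here is a minimal finite-dimensional sketch of the iteratively regularized Gauß-Newton method; the toy operator, the parameter sequence, and all names are illustrative assumptions.

```python
import numpy as np

def irgn(F, Fprime, xi, alphas, x0=None):
    """Iteratively regularized Gauss-Newton method for F(x) = 0 with
    theta(lam, alpha) = 1/(lam + alpha):
        x_{n+1} = xi - (J^T J + alpha_n I)^{-1} J^T [F(x_n) - J (x_n - xi)],
    where J = F'(x_n)."""
    x = xi.copy() if x0 is None else x0.copy()
    for alpha in alphas:
        J = Fprime(x)
        r = F(x) - J @ (x - xi)
        x = xi - np.linalg.solve(J.T @ J + alpha * np.eye(len(xi)), J.T @ r)
    return x

# Toy smooth nonlinear problem F(x) = 0 (purely illustrative).
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**3])
Fprime = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -3.0 * x[1]**2]])
xi = np.array([0.5, 0.5])                   # initial guess / anchor element
alphas = [0.5 * 0.7**n for n in range(30)]  # positive, alpha_n -> 0
x_approx = irgn(F, Fprime, xi, alphas)
```

For this \( \theta \), the update is algebraically equivalent to \( x_{n+1} = x_n - (J^\ast J + \alpha_n E)^{-1}[ J^\ast F(x_n) + \alpha_n (x_n - \xi)] \) with \( J = F^{\prime}(x_n) \), which is the form in which the method is often stated.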
The final section of Chapter 4 deals with an iterative procedure for determining a minimizer \( x_\alpha^\delta \) of the Tikhonov functional \( \widetilde{T}_\alpha(x) = \frac{1}{2} \| F(x) - \widetilde{f} \|^2 + \frac{\alpha}{2} \| x - \xi \|^2 \), where \( \| \widetilde{f} - f \| \leq \delta \) holds. The iterative processes the authors propose in this section have two levels. On the outer level, the regularization parameter \( \alpha \) is reduced in each iteration, while the inner iterative process is finite and uses the current parameter \( \alpha > 0 \). The approximation obtained in the inner cycle serves as a starting point for the next iterative approximation of \( x_\alpha^\delta \) corresponding to the reduced regularization parameter \( \alpha \). In the initial steps, this process aims to approach a small level set of the Tikhonov functional containing the desired point \( x_\alpha^\delta \). As soon as this level set is attained, the subsequent minimization can be realized by any iterative relaxation method that ensures convergence for every strongly convex objective functional.

In Chapter 5, finite-dimensional iterative procedures for solving the equation \( F(x) = 0 \) are considered (excluding the final section of Chapter 5, which deals with \( F(x) = f \)). It is assumed throughout this chapter, unless specified otherwise, that \( \widetilde{F}: X_1 \to X_2 \) is some available approximation to \( F \). In addition, \( x^* \) denotes a desired solution of the considered equation. In Section 5.1, Landweber type iterations of the form \[ x_{n+1} = \widetilde{\Pi}_N^{(m)}(x_0)(x_n - \xi -\gamma \widetilde{F}^{\prime}(x_n)^\ast \widetilde{F}(x_n)) + \xi \] are studied. Both operators \( F \) and \( \widetilde{F} \) are assumed to be Fréchet differentiable on \( X_1 \), and \( F^\prime(x) \) and \( \widetilde{F}^\prime(x) \) are assumed to be compact operators for all \( x \) contained in a certain closed ball centered at a solution \( x^* \) of the equation \( F(x) = 0 \). Moreover, \(\widetilde{\Pi}_N^{(m)}(x_0), \, m = 0,1, \ldots \), denote approximations to the orthogonal projection from \( X_1 \) onto the linear subspace spanned by all eigenvectors of the operator \( \widetilde{F}^{\prime}(x_0)^\ast \widetilde{F}^{\prime}(x_0) \) corresponding to eigenvalues \( \lambda \geq N \). Estimates of the error \( \| x_n-x^* \| \) are provided.

Section 5.2 deals with the convergence of the iteration \[ x_{n+1} = P_{\mathcal{M}}(x_n - \xi -\gamma \widetilde{F}^{\prime}(x_n)^\ast \widetilde{F}(x_n)) + \xi, \] where \( P_{\mathcal{M}} \) denotes the orthogonal projection onto a finite-dimensional subspace \( \mathcal{M} \) which is not necessarily related to the operator \( F \). The subject of Section 5.3 is optimization problems of the form \[ \min_{x\in\mathcal{M}} \widetilde{J}_{\mathcal{M},\xi}(x), \quad \widetilde{J}_{\mathcal{M},\xi}(x) = \tfrac{1}{2} \| \widetilde{F}(x+\xi - P_\mathcal{M} \xi) \|^2, \quad x \in \mathcal{M}. \] Strong convexity of the functional \( \widetilde{J}_{\mathcal{M},\xi} \) and the existence of stationary points are considered. In addition, error estimates for the limit points of convergent relaxational methods are provided. Section 5.4 deals with a posteriori estimates of approximate solutions. Let \( \widetilde{F}: \mathcal{M} \to X_2 \) be an approximation of \( F \), where \( \mathcal{M} \subset X_1 \) is some finite-dimensional subspace.
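A minimal sketch of the two-level process described above, assuming a geometric reduction of \( \alpha \) and plain gradient steps as the inner relaxation method (both choices, like all names below, are illustrative and not prescribed by the book):

```python
import numpy as np

def two_level_tikhonov(F, Fprime, f_tilde, xi, alpha0=1.0, q=0.7,
                       n_outer=20, n_inner=10, step=0.05):
    """Outer level: reduce alpha geometrically. Inner level: finitely many
    gradient steps on T_alpha(x) = 0.5*||F(x) - f_tilde||^2
    + 0.5*alpha*||x - xi||^2, restarted from the previous approximation."""
    x, alpha = xi.copy(), alpha0
    for _ in range(n_outer):
        for _ in range(n_inner):   # finite inner cycle for the current alpha
            grad = Fprime(x).T @ (F(x) - f_tilde) + alpha * (x - xi)
            x = x - step * grad    # any convergent relaxation method would do
        alpha *= q                 # pass the result on to the reduced alpha
    return x

# Illustrative operator and data.
F = lambda x: np.array([x[0]**2 + x[1], x[0] - x[1]])
Fprime = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
x_approx = two_level_tikhonov(F, Fprime, f_tilde=np.zeros(2), xi=np.array([0.3, 0.3]))
```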
The main concern of this section is to give estimates for the error \( \| z - x^* \| \), provided that \( z \in \mathcal{M} \) satisfies \( \| \widetilde{F}(z) \| \leq \delta \) and, in addition, \( \widetilde{F}^{\prime}(z)^\ast \widetilde{F}^{\prime}(z): \mathcal{M} \to \mathcal{M} \) is continuously invertible. Section 5.5 is concerned with finite-dimensional approximations of equations \( F(x) = f, \, x \in D \), where \( F: X_1 \to X_2 \) is continuous on a closed, convex and bounded subset \( D \subset X_1 \). In addition, the problem is assumed to be conditionally well-posed, i.e., \( \| F(x) - F(y) \| \geq m \| x - y \|^p \) for \( x, y \in D \), with \( m> 0 \) and some given parameter \( 1 \leq p \leq 2 \). Let \( \widetilde{J}(x) = \tfrac{1}{2} \| F(x) - \widetilde{f} \|^2 \), where \( \widetilde{f} \in X_2 \) is available such that \( \| \widetilde{f} - f \| \leq \delta \). In addition, \( D_N = D \cap \mathcal{H}_N \), where \( \mathcal{H}_N \subset X_1 \) is some finite-dimensional subspace. Estimates for the error \( \| \widetilde{x}_N^* - x^* \| \) are provided, where \( \widetilde{x}_N^* = \arg\min \{\widetilde{J}(x) : x \in D_N \} \).

Chapter 6 is concerned with the regularization of nonlinear variational inequalities and optimization problems. Section 6.1 introduces variational inequalities \( (F(x),x-z) \leq 0 \) for all \( z \in Q \), where \( F: Q \to X \) is a monotone operator in a real Hilbert space \( X \), and \( Q \) is a closed convex subset of \( X \). Equivalent formulations for variational inequalities are given, e.g., \( P_Q(x-\beta F(x)) = x \), where \( \beta > 0 \), and \( P_Q \) denotes the metric projection onto the set \( Q \). In addition, existence and uniqueness results for variational inequalities are provided, and the regularizing properties of a regularized variational inequality \( (F_\alpha(x),x-z) \leq 0 \;\forall z \in Q \) are investigated, where \( F_\alpha(x) = F(x) + \alpha (x-\xi) \). Here and in what follows in this chapter, \( \xi \in X \) denotes an initial guess to a solution of the considered unregularized variational inequality.

In Section 6.2, convergence of the simple iterative method \( x_{n+1} = P_Q(x_n-\beta_n(F(x_n) + \alpha_n(x_n-\xi))), \;n = 0,1,\ldots \), with \( x_0 \in Q, \, \alpha_n, \beta_n > 0 \), is investigated. Another topic is the iteratively regularized Newton-Kantorovich type method, in which \( x_{n+1} \) is obtained from \( x_n \) by solving the fixed point equation \[ x = P_Q\{x - \beta_n [ F(x_n) + \alpha_n (x_n - \xi) + (F^{\prime}(x_n) + \alpha_n E)(x-x_n) ]\}. \] Section 6.3 deals with the regularizing properties of the simple iterative method just introduced, with the exact operator \( F \) replaced by the perturbed version \( \widetilde{F} \), where \( \| \widetilde{F}(x) - F(x) \| \leq \delta \) for all \( x \in Q \). Section 6.4 is concerned with the special case \( Q = \{ x \in X : \| x \| \leq R \} \). In addition, the considered variational inequality is assumed to have a solution \( x^* \) on the boundary of the ball \( Q \), which thus also solves the equation \( \mathcal{F}_\beta(x) := x - \tfrac{R}{\| x-\beta F(x) \| } (x-\beta F(x)) = 0 \) for \( x \in X \). This section examines, e.g., the properties of the Fréchet derivative \( \mathcal{F}^\prime_\beta(x^*) \). In Section 6.5, the optimization problem \( \min \{J(x) : x \in Q \} \) is considered, where \( J \) is a real-valued functional on a closed convex subset \( Q \) of a real Hilbert space \( X \).
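For the ball \( Q = \{ x : \| x \| \leq R \} \) of Section 6.4, the metric projection is explicit, so the simple iterative method of Section 6.2 admits a compact sketch; the monotone toy operator \( F(x) = Mx \) and all parameter choices are assumptions for illustration.

```python
import numpy as np

def project_ball(x, R):
    """Metric projection P_Q onto the ball Q = {x : ||x|| <= R}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= R else (R / nrm) * x

def regularized_vi_iteration(F, xi, R, alphas, betas, x0):
    """Simple iterative method for the variational inequality
    (F(x), x - z) <= 0 for all z in Q:
        x_{n+1} = P_Q(x_n - beta_n (F(x_n) + alpha_n (x_n - xi)))."""
    x = x0.copy()
    for alpha, beta in zip(alphas, betas):
        x = project_ball(x - beta * (F(x) + alpha * (x - xi)), R)
    return x

# Monotone toy operator F(x) = M x with M positive definite (illustrative).
M = np.array([[2.0, 0.5], [0.5, 1.0]])
F = lambda x: M @ x
n = 200
alphas = [(k + 1) ** -0.5 for k in range(n)]  # alpha_n -> 0
betas = [0.1] * n
x_approx = regularized_vi_iteration(F, xi=np.zeros(2), R=1.0,
                                    alphas=alphas, betas=betas,
                                    x0=np.array([0.9, 0.0]))
```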
The exact cost functional \( J \) is only approximately given, i.e., a functional \( \widetilde{J} \) is available which satisfies \( | \widetilde{J}(x) - J(x) | \leq \delta(1+ \tfrac{1}{2} \| x-\xi \|^2) \) for \( x \in Q \), where \( \xi \in X \) is an a priori guess of the desired solution. Let \( \widetilde{T}_\alpha(x) = \widetilde{J}(x) + \tfrac{\alpha}{2} \| x-\xi \|^2 \), where \( \alpha >0 \) is a regularization parameter. The authors investigate a version of Tikhonov's scheme in which the approximation \( \widetilde{x} = x_{\alpha,\varepsilon}^\delta \in Q \) satisfies \( \inf_{x\in Q} \widetilde{T}_\alpha(x) \leq \widetilde{T}_\alpha(\widetilde{x}) \leq \inf_{x\in Q} \widetilde{T}_\alpha(x) + \varepsilon \). In Section 6.6, which concludes the book, the authors consider conditionally well-posed problems, i.e., the corresponding operator \( G \), considered for evaluation, is relatively continuous on its domain of definition. It is shown that \( G \) admits a continuous extension to the whole space. This fact is used to show that such problems can be regularized independently of the error level in the input data.

This monograph provides a very well-written, competent, easy-to-read, and self-contained survey of a wide range of methods for solving linear and nonlinear inverse problems. Major parts of the book are based on papers of the authors. The text can be used for self-study and will be of interest to experts in the field as well as to graduate students. It requires only basic knowledge of functional analysis.
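As a final illustration of Section 6.5: the scheme only requires an \( \varepsilon \)-approximate minimizer of \( \widetilde{T}_\alpha \) over \( Q \), which in practice could be produced by any convergent method. A minimal sketch, assuming projected gradient descent as the minimization routine and an illustrative quadratic \( \widetilde{J} \) (all names are hypothetical):

```python
import numpy as np

def approx_tikhonov_point(grad_J_tilde, project_Q, xi, alpha,
                          step=0.05, n_steps=500):
    """Approximately minimize T_alpha(x) = J~(x) + (alpha/2)*||x - xi||^2
    over Q by projected gradient descent; the scheme of Section 6.5 accepts
    any point whose value is within epsilon of inf_Q T_alpha."""
    x = project_Q(xi)
    for _ in range(n_steps):
        grad = grad_J_tilde(x) + alpha * (x - xi)
        x = project_Q(x - step * grad)
    return x

# Illustrative data: J~(x) = 0.5*||B x - b||^2, minimized over the unit ball.
B = np.array([[1.0, 0.2], [0.0, 0.3]])
b = np.array([1.0, 0.5])
grad_J_tilde = lambda x: B.T @ (B @ x - b)
project_Q = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
x_tilde = approx_tikhonov_point(grad_J_tilde, project_Q,
                                xi=np.zeros(2), alpha=0.1)
```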
Keywords: inverse problem; ill-posed problem; regularization method; Riesz-Dunford formula; Tikhonov regularization; random noise; iterated Tikhonov regularization; ill-posed Cauchy problem; heat equation backwards in time; multistep method; Gauß-Newton method; Newton-Kantorovich method; gradient method; Landweber iteration; abstract Cauchy problem method; regularization; variational inequality; optimization
