Asymptotic optimality of generalized cross-validation for choosing the regularization parameter (Q1326436)
From MaRDI portal
Latest revision as of 10:34, 30 July 2024
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Asymptotic optimality of generalized cross-validation for choosing the regularization parameter | scientific article | |
Statements
Asymptotic optimality of generalized cross-validation for choosing the regularization parameter (English)
7 July 1994
Let \(f_{n\lambda}\) be the regularized solution of a general linear operator equation \(Kf_0 = g\) from discrete, noisy data \(y_i = g(x_i) + \varepsilon_i\), \(i = 1, \dots, n\), where the \(\varepsilon_i\) are uncorrelated random errors. We consider the prominent method of generalized cross-validation (GCV) for choosing the crucial regularization parameter \(\lambda\). The practical GCV estimate \(\hat\lambda_V\) and its "expected" counterpart \(\lambda_V\) are defined as the minimizers of the GCV functions \(V(\lambda)\) and \(EV(\lambda)\), respectively, where \(E\) denotes expectation. We investigate the asymptotic performance of \(\lambda_V\) with respect to each of the following loss functions: the risk, an \(L^2\)-norm on the output error \(Kf_{n\lambda} - g\), and a whole class of stronger norms on the input error \(f_{n\lambda} - f_0\). In the special cases of data smoothing and Fourier differentiation, it is known that, as \(n \to \infty\), \(\lambda_V\) is asymptotically optimal (ao) with respect to the risk criterion. We show this to be true in general, and also extend it to the \(L^2\)-norm criterion. The asymptotic optimality is independent of the error variance, the ill-posedness of the problem, and the smoothness index of the solution \(f_0\). For the input error criterion, it is shown that \(\lambda_V\) is weakly ao for a certain class of \(f_0\) if the smoothness of \(f_0\) relative to the regularization space is not too high, but that otherwise \(\lambda_V\) is suboptimal. This result is illustrated in the case of numerical differentiation.
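The GCV function minimized by \(\hat\lambda_V\) can be sketched for the discrete Tikhonov setting: with influence matrix \(A(\lambda) = K(K^TK + n\lambda I)^{-1}K^T\), one minimizes \(V(\lambda) = n\,\|(I - A(\lambda))y\|^2 / [\operatorname{tr}(I - A(\lambda))]^2\). The sketch below is illustrative only and is not the paper's construction: the discretized Volterra integration operator, the test solution \(f_0\), the noise level, and the search grid are all assumptions chosen for the demonstration.

```python
import numpy as np

def gcv_score(lam, y, U, s):
    """GCV function V(lam) for Tikhonov regularization, via the SVD K = U diag(s) V^T.

    The influence matrix A(lam) = K (K^T K + n*lam*I)^{-1} K^T has eigenvalues
    s_i^2 / (s_i^2 + n*lam) in the basis of U's columns, so both the residual
    norm and tr(I - A(lam)) are cheap to evaluate.
    """
    n = len(y)
    d = s**2 / (s**2 + n * lam)            # diagonal of A(lam) in the SVD basis
    Uty = U.T @ y
    # ||(I - A)y||^2 = sum((1-d_i)^2 (U^T y)_i^2) + ||y - U U^T y||^2
    resid2 = np.sum(((1.0 - d) * Uty) ** 2) + np.sum(y**2) - np.sum(Uty**2)
    trace = n - np.sum(d)                  # tr(I - A(lam))
    return n * resid2 / trace**2

# Illustrative ill-posed problem: K is a discretized integration operator,
# so recovering f0 from y amounts to numerical differentiation of noisy data.
n = 100
h = 1.0 / n
x = (np.arange(1, n + 1) - 0.5) * h
K = h * np.tril(np.ones((n, n)))           # first-kind Volterra (integration) operator
f0 = np.sin(np.pi * x)                     # assumed "true" solution
g = K @ f0
rng = np.random.default_rng(0)
y = g + 1e-3 * rng.standard_normal(n)      # discrete noisy data y_i = g(x_i) + eps_i

U, s, Vt = np.linalg.svd(K, full_matrices=False)
grid = np.logspace(-12, -2, 60)            # illustrative search grid for lambda
scores = np.array([gcv_score(lam, y, U, s) for lam in grid])
lam_hat = grid[scores.argmin()]            # practical GCV estimate of lambda

# Regularized solution f_{n,lam} = (K^T K + n*lam*I)^{-1} K^T y in SVD form
f_hat = Vt.T @ (s / (s**2 + n * lam_hat) * (U.T @ y))
```

In practice one would minimize \(V(\lambda)\) with a proper 1-D optimizer rather than a fixed grid; the grid keeps the sketch short and makes the minimizer easy to inspect.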
regularization method
ill-posed problem
method of generalized cross-validation
regularization parameter
data smoothing
Fourier differentiation
numerical differentiation