On average case complexity of linear problems with noisy information (Q2638754)

From MaRDI portal
Language: English
Label: On average case complexity of linear problems with noisy information
Description: scientific article

    Statements

    On average case complexity of linear problems with noisy information (English)
    1990
    Let \(F\) be a Banach space and \(\mu\) a Gaussian Borel measure on \(F\) with covariance operator \(C_{\mu}\). For a functional \(L\in F^*\), define \(\|L\|_{\mu}=\sqrt{LC_{\mu}L}\). Let \(S\colon F\to G\) be a continuous linear operator with values in a Hilbert space \(G\). A choice of \(n\) functionals \(L_1,\dots,L_n\) from \(F^*\) defines an information operator \(N\colon F\to\mathbb{R}^n\), \(Nf=(L_1f,\dots,L_nf)\). An algorithm is a mapping \(\phi\colon\mathbb{R}^n\to G\). The quantity \(e_0(\phi,N)=\bigl(\int_{F}\|Sf-\phi(Nf)\|^2\,\mu(df)\bigr)^{1/2}\) is called the average error. In the presence of random noise, however, the information takes the form of an \(\mathbb{R}^n\)-valued Gaussian random variable \(Z_{\delta}=(z_1,\dots,z_n)\) with independent components whose means equal \(L_if\) and whose variances depend on the \(\mu\)-norms of the functionals \(L_i\), namely \(\sigma^2_i=\delta\|L_i\|^2_{\mu}\); that is, \(z_i=L_if+\sqrt{\delta}\,\|L_i\|_{\mu}x_i\), where \((x_i)\) is a discrete white noise. The corresponding quadratic mean error \(e_{\delta}(\phi,N)\) is obtained by averaging the error \(e_0\) with respect to the distribution of \(Z_{\delta}\).

    The optimal algorithm is the one that minimizes this error; the minimal value is called the average radius of \(N\). The \(n\)th optimal information is the one, among all \(N\) of rank \(n\), that minimizes the average radius, and the corresponding value is called the \(n\)th optimal average radius. Explicit formulas for the optimal algorithm, the \(n\)th optimal radius and the \(n\)th optimal information are given.

    With an information \(N\) and an algorithm \(\phi\) one associates a quantity \(\mathrm{cost}(\phi,N)\), related to the total number of functional evaluations. The average \(\epsilon\)-complexity is the least cost for which the average error does not exceed \(\epsilon\); the average \(\epsilon\)-cardinality is defined analogously. Tight upper and lower bounds on the average \(\epsilon\)-complexity are derived, which imply in particular that it tends to \(\infty\) at least as fast as \(\epsilon^{-2}\).

    The case of adaptive information, i.e., \(z_i=L_i(f;z_1,\dots,z_{i-1})+\sqrt{\delta}\,\|L_i\|_{\mu}x_i\) with \((x_i)\) a discrete white noise and with the rank \(n\) of the associated information \(N^a\) possibly varying randomly, is considerably more involved, yet adaptive approximation performs no better than nonadaptive approximation. The results are applied to the approximation problem in finite-dimensional spaces.
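    The review mentions explicit formulas for the optimal algorithm and the average radius without reproducing them. Below is a minimal numerical sketch of a finite-dimensional version of the model above, assuming (by standard Gaussian conditioning, not taken from the paper) that the optimal algorithm is the conditional mean \(\phi(z)=S\,\mathbb{E}[f\mid Z_{\delta}=z]\), which is a linear map in this Gaussian setting. All dimensions, matrices and the noise level \(\delta\) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional stand-ins (all sizes are arbitrary choices):
# F = R^d with mu = N(0, C), G = R^m, S an m x d matrix, and the
# information operator N given by n functionals, the rows of A.
d, m, n, delta = 6, 3, 4, 0.1
M = rng.standard_normal((d, d))
C = M @ M.T + np.eye(d)          # covariance operator C_mu (SPD)
S = rng.standard_normal((m, d))  # solution operator S: F -> G
A = rng.standard_normal((n, d))  # rows are the functionals L_i

# ||L_i||_mu^2 = L_i C L_i^T are the diagonal entries of A C A^T,
# so the noise covariance is Sigma = delta * diag(||L_i||_mu^2).
ACA = A @ C @ A.T
Sigma = delta * np.diag(np.diag(ACA))

# Assumed optimal algorithm (conditional mean under Gaussian conditioning):
# phi(z) = S C A^T (A C A^T + Sigma)^{-1} z.
K_inv = np.linalg.inv(ACA + Sigma)
phi = S @ C @ A.T @ K_inv        # the linear map z -> phi(z)

# Average radius: e_delta(phi, N)^2 = trace(S (C - C A^T K^{-1} A C) S^T),
# i.e. the trace of the image under S of the posterior covariance of f.
post_cov = C - C @ A.T @ K_inv @ A @ C
radius = np.sqrt(np.trace(S @ post_cov @ S.T))

# Monte Carlo estimate of the same average error.
trials = 100_000
F_smp = np.linalg.cholesky(C) @ rng.standard_normal((d, trials))       # f ~ mu
noise = np.sqrt(np.diag(Sigma))[:, None] * rng.standard_normal((n, trials))
Z = A @ F_smp + noise            # noisy information z_i = L_i f + noise_i
err = S @ F_smp - phi @ Z
mc = np.sqrt(np.mean(np.sum(err**2, axis=0)))
print(radius, mc)                # the two values should nearly agree
```

    Setting \(\delta=0\) (noiseless information) recovers the average error \(e_0\) of the same linear algorithm, so the sketch also shows how the noise level inflates the average radius.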
    Keywords: information-based complexity; noisy information; Banach space; Gaussian Borel measure; Hilbert space; information operator; average error; discrete white noise; optimal algorithm; optimal average radius

    Identifiers

    Q2638754