The information-based complexity of approximation problem by adaptive Monte Carlo methods (Q2519328)
Language | Label | Description | Also known as
---|---|---|---
English | The information-based complexity of approximation problem by adaptive Monte Carlo methods | scientific article |
Statements
The information-based complexity of approximation problem by adaptive Monte Carlo methods (English)
26 January 2009
A basic problem of information-based complexity theory is to determine the asymptotic degree of the complexity of various numerical problems and to give optimal algorithms in different computational settings. The purpose of the present paper is to study the complexity of the approximation problem on a multivariate Sobolev space with bounded mixed derivatives \(MW_{p,\alpha}^{r}({\mathbb T}^{d})\), \(1 < p < \infty\), over the torus \({\mathbb T}^{d}=[0,2 \pi)^{d}\), by adaptive Monte Carlo methods. Applying discretization techniques and some properties of a pseudo-\(s\)-scale, the exact asymptotic order of this problem is obtained. In the paper \({\mathcal N}^{n}(X)\) denotes the set of all adaptive linear information operators of cardinality \(n\) on a Banach space \(X\), \(\Phi^{n}(Y)\), with a Banach space \(Y\), is the set of all (not necessarily linear or continuous) mappings from \({\mathbb R}^{n}\) to \(Y\), and \(L(X,Y)\) is the set of all bounded linear operators from \(X\) to \(Y\).

In Section 2 the necessary definitions and notation are introduced. The notions of the \(n\)-th Kolmogorov width and the \(n\)-th Gel'fand width are recalled. A given operator \(S \in L(X,Y)\) is approximated by mappings of the form \(\varphi \circ N\), where \(N \in {\mathcal N}^{n}(X)\) and \(\varphi \in \Phi^{n}(Y)\), and the error \(e(S,N, \varphi)\) of the method \((N, \varphi)\) is defined. Definition 3 introduces the concept of an abstract adaptive Monte Carlo method; the error \(e(S, M,X,Y)\) of an adaptive Monte Carlo method \(M\) and the \(n\)-th adaptive Gel'fand Monte Carlo randomized error \(e_{n}^{\text{MC}}(S,X,Y)\) are then defined. The multivariate Sobolev space of functions with bounded mixed derivatives \(MW_{p, \alpha}^{r}({\mathbb T}^{d})\) is recalled, and \(I\) denotes the Sobolev embedding from \(MW_{p, \alpha}^{r}\) into the Lebesgue space \(L_{p}\). Theorem 1 gives the exact order of the error \(e_{n}^{\text{MC}}(I,MW_{p, \alpha}^{r},L_{p})\). The randomized approximation rates of Theorem 1 are compared with the deterministic approximation rates presented in Theorem 2: randomization yields better rates for \(p < q\) and \(2 \leq q < \infty\), and the largest gap can reach a factor \((n^{-1} \log^{\nu-1} n)^{1/2}\). The rates of Theorem 1 are also compared with the result on linear randomized approximation presented in Theorem 3: the adaptive nonlinear randomized methods provide better rates than linear randomized methods for \(p < q\) and \(1 < p < 2\).

In Section 3 the result of Theorem 1 is proved. The discretization technique due to \textit{V. E. Maiorov} [Usp. Mat. Nauk 30, No. 6(186), 179--180 (1975; Zbl 0319.46024)], which reduces the approximation problem for the Sobolev embedding to corresponding problems on finite-dimensional sequence spaces, is used. Several lemmas are proved to obtain the upper and lower estimates of the \(n\)-th Gel'fand Monte Carlo errors on finite-dimensional sequence spaces.
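For orientation, the following displays the worst-case error of a method \((N,\varphi)\) and the corresponding randomized quantities as they are commonly defined in information-based complexity; this is only a sketch of the usual setting, and the paper's own definitions (in particular the formalization of the randomization in Definition 3) may differ in detail. Here \(B_X\) denotes the unit ball of \(X\), and \((\Omega,\mathbb{P})\) is a probability space indexing the random choice of methods; these symbols are not taken from the paper.
\[
e(S,N,\varphi) = \sup_{f \in B_X} \|Sf - \varphi(Nf)\|_Y, \qquad
e(S,M,X,Y) = \sup_{f \in B_X} \mathbb{E}_{\omega} \|Sf - \varphi_{\omega}(N_{\omega} f)\|_Y,
\]
\[
e_{n}^{\text{MC}}(S,X,Y) = \inf_{M} e(S,M,X,Y),
\]
where \(M = (N_{\omega}, \varphi_{\omega})_{\omega \in \Omega}\) ranges over adaptive Monte Carlo methods with \(N_{\omega} \in {\mathcal N}^{n}(X)\) and \(\varphi_{\omega} \in \Phi^{n}(Y)\).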
adaptive Monte Carlo method
Sobolev spaces with bounded mixed derivatives
complexity of the approximation problem
asymptotic order of the errors
discretization technique