Bases in function spaces, sampling, discrepancy, numerical integration (Q974431)

    Published: 3 June 2010
    This monograph studies (optimal) sampling and numerical integration in the multivariate case. Precisely, if \(f: \Omega\to {\mathbb C}\) is a continuous function on some domain, belonging to some function space \(X\), then approximation in some quasi-normed space \(Y\), or integration of \(f\), shall be based on point evaluations at \(x_{1},\dots,x_{k}\in\Omega\). Any reconstruction of \(f\) based on these points can be written as \(S_{k}(f)= \phi(f(x_{1}),\dots,f(x_{k}))\), where \(\phi: {\mathbb C}^{k}\to Y\) may be linear or arbitrary. Similarly, any quadrature (integration formula) may be written as \(I_{k}(f)= \sum_{j=1}^{k}a_{j}f(x_{j})\). The following quantities then describe the optimality of approximation and integration, respectively. Assuming that the canonical embedding \(\text{id}: X\hookrightarrow Y\) is continuous, the \textit{sampling numbers} are given as \[ g_{k}(\text{id}) := \inf_{\phi,\;x_{1},\dots,x_{k}}\,\sup_{f\in B_{X}} \| f- S_{k}(f) \|_{Y}, \] where the supremum is taken over the unit ball \(B_{X}\subset X\), and the infimum over all choices of at most \(k\) points in \(\Omega\) and all reconstructions \(\phi\). Similarly, one may study the integral numbers \[ \text{Int}_{k}(X):= \inf_{a_{1},\dots,a_{k},\;x_{1},\dots,x_{k}}\,\sup_{f\in B_{X}} \biggl| \int_{\Omega}f - I_{k}(f) \biggr|, \] where the infimum is over all quadratures using at most \(k\) points. The systematic study of these quantities started only recently; an early reference is \textit{E.\,Novak} and \textit{H.\,Triebel} [``Function spaces in Lipschitz domains and optimal rates of convergence for sampling'', Constructive Approximation 23, No.\,3, 325--350 (2006; Zbl 1106.41014)]. As it turns out, spaces of continuous functions which are given in terms of expansions along the \textit{Faber system}, the integrated and hence continuous version of the Haar system, play a crucial role.
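To make the abstract objects concrete, the following minimal sketch (an illustration, not taken from the monograph) instantiates the quadrature formula \(I_{k}(f)=\sum_{j=1}^{k}a_{j}f(x_{j})\) with the composite midpoint rule on \([0,1]\), i.e., equal weights \(a_{j}=1/k\) and the midpoints of \(k\) equal subintervals as nodes:

```python
# A minimal illustration (not from the monograph): any quadrature formula
# I_k(f) = sum_j a_j f(x_j) is fixed by its nodes x_j and weights a_j.
# The composite midpoint rule on [0, 1] is one concrete choice.

def quadrature(f, nodes, weights):
    """Evaluate I_k(f) = sum_j a_j * f(x_j)."""
    return sum(a * f(x) for a, x in zip(weights, nodes))

k = 1000
nodes = [(j + 0.5) / k for j in range(k)]  # midpoints of k equal subintervals
weights = [1.0 / k] * k                    # equal weights a_j = 1/k

# Integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
approx = quadrature(lambda x: x * x, nodes, weights)
error = abs(approx - 1.0 / 3.0)            # of order k^{-2} for the midpoint rule
```

The error made by a specific rule such as this one is an upper bound for \(\text{Int}_{k}(X)\) on any class \(X\) containing the integrand; the integral numbers ask for the best possible such rule.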
    Accordingly, the major portion of this volume, pp.\,1--173, is concerned with a description and properties of the approximating bases and the related spaces: Chapters~1 (Function spaces), 2~(Haar bases), and~3 (Faber bases). These spaces reflect finite smoothness, which is measured in Besov and Sobolev-type norms. In the \textit{isotropic} case, the problem is well settled, and the emphasis is on classes of \textit{anisotropic} spaces, for example, spaces with dominating mixed smoothness. The asymptotic behavior of the sampling and integral numbers, as \(k\to\infty\), is determined for a variety of spaces in Chapters~4 (Sampling) and~5 (Numerical integration). Prototypically, the decay of these quantities is of the form \(k^{-\sigma}(\log k)^{\mu}\). In many cases, the power \(\sigma\) is sharp, while the exact power \(\mu\) of the logarithmic contribution is not known. The author highlights various aspects, such as properties of the domain, but also the impact of the smoothness. The understanding of the sampling and integral numbers is relevant for several applications. On the one hand, it has impact in the context of \textit{information-based complexity}, when comparing with the best possible behavior in the sense of \textit{approximation, Kolmogorov, Gelfand} numbers, etc.; a classical treatise on such problems is [\textit{J.\,F.\thinspace Traub, G.\,W.\thinspace Wasilkowski} and \textit{H.\,Woźniakowski}, ``Information-based complexity'' (Boston:\ Academic Press) (1988; Zbl 0654.94004)]. On the other hand, the problem is also of relevance in the context of \textit{quasi-Monte Carlo integration}, where the \(k\)-point quadrature formula has equal weights~\(a_{j}=1/k\), \(j=1,\dots,k\), and hence the quality of the quadrature formula is entirely determined by the location of the nodes~\(x_{1},\dots,x_{k}\). Designing appropriate point sets is the objective of \textit{discrepancy theory}, the theory of distributions of points in the \(n\)-dimensional unit cube \(Q^{n}\).
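As an illustration of the equal-weight (quasi-Monte Carlo) setting, the sketch below uses the first \(k\) terms of the classical base-2 van der Corput sequence as nodes; the construction is standard, but the code itself is an illustration of mine, not from the book:

```python
# Quasi-Monte Carlo in one dimension (illustration, not from the book):
# equal weights a_j = 1/k, nodes from the base-2 van der Corput sequence,
# a classical low-discrepancy point set.

def van_der_corput(j, base=2):
    """j-th term of the van der Corput sequence (radical inverse of j)."""
    q, denom = 0.0, 1.0
    while j > 0:
        denom *= base
        j, r = divmod(j, base)
        q += r / denom
    return q

def qmc_integrate(f, k):
    """Equal-weight quadrature: the node locations alone determine the quality."""
    return sum(f(van_der_corput(j)) for j in range(1, k + 1)) / k

# The first few nodes are 1/2, 1/4, 3/4, 1/8, ...
# Integrating f(x) = x^2 (exact value 1/3) with k = 4096 nodes gives an
# error consistent with the (log k)/k decay promised by Koksma-Hlawka.
approx = qmc_integrate(lambda x: x * x, 4096)
```

Since the weights are fixed at \(1/k\), improving such a rule means improving the point set, which is exactly the concern of discrepancy theory discussed next.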
    The quality of a point set \(\Gamma\) is measured by the discrepancy function (local discrepancy) \[ \text{disc}_{\Gamma}(x):= \text{vol}(R_{x}) - \#\{j: x_{j}\in R_{x}\}/{k},\quad x\in Q^{n}, \] where \(\Gamma\) is any point set consisting of \(k\) points, and \(R_{x}\) is the rectangle anchored at 0 with upper right corner~\(x\). This function measures the deviation of the fraction of points in the rectangle \(R_{x}\) from its volume. The minimal (in the \(L_{p}\)-sense) discrepancy of \(k\) points is \[ \text{disc}_{k}^{\ast}(L_{p}(Q^{n})) := \inf\{ \| \text{disc}_{\Gamma}\|_{L_{p}(Q^{n})}:\;\# \Gamma \leq k\}, \] where the infimum is taken over all possible choices of \(k\) points (more general quantities are considered in the monograph). Within the classical context of quasi-Monte Carlo integration (\(p=\infty\)), the \textit{Koksma-Hlawka inequality} bounds the integration error in terms of the variation of the integrand and \(\text{disc}_{k}^{\ast}(L_{\infty}(Q^{n}))\); we refer to [\textit{H.\,Niederreiter}, ``Random number generation and quasi-Monte Carlo methods'' (CBMS-NSF Regional Conference Series in Applied Mathematics 63; Philadelphia:\ SIAM) (1992; Zbl 0761.65002)]. In Chapter~6 (Discrepancy), the author takes a more general point of view, and within this context, a general concept of discrepancy numbers proves important. These appear to be dual to the corresponding integral numbers. This paves the way to establishing the asymptotics of the discrepancy numbers from the results for the integral numbers, greatly extending previously known and famous results in this context. An earlier account of discrepancy theory from the perspective of function spaces is [\textit{V.\,N.\thinspace Temlyakov}, ``Cubature formulas, discrepancy, and nonlinear approximation'', J.\ Complexity 19, No.\,3, 352--391 (2003; Zbl 1031.41016)].
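The discrepancy function is easy to evaluate numerically. The following sketch (an illustration of mine with two stated assumptions: points on the boundary of the box are excluded via strict inequalities, and the supremum is only approximated by a finite grid search) computes \(\text{disc}_{\Gamma}\) for a point set in the unit square:

```python
# Local discrepancy of a point set Gamma in the unit square (illustration only):
# disc_Gamma(x) = vol(R_x) - #{j : x_j in R_x} / k, with R_x = [0, x_1) x [0, x_2).

def local_discrepancy(points, x):
    """Volume of the anchored box minus the fraction of points inside it."""
    k = len(points)
    vol = x[0] * x[1]
    inside = sum(1 for p in points if p[0] < x[0] and p[1] < x[1])
    return vol - inside / k

def max_discrepancy_on_grid(points, m=50):
    """Crude approximation of sup_x |disc_Gamma(x)| over an m x m grid of corners."""
    return max(abs(local_discrepancy(points, (i / m, j / m)))
               for i in range(1, m + 1) for j in range(1, m + 1))

# A regular 5 x 5 grid of k = 25 points: its star discrepancy decays only like
# k^{-1/2}, slower than the (log k)/k rate of genuine low-discrepancy sets.
grid = [((i + 0.5) / 5, (j + 0.5) / 5) for i in range(5) for j in range(5)]
d = max_discrepancy_on_grid(grid)
```

At the corner \(x=(1,1)\) the local discrepancy vanishes by construction; the interesting behavior is the size of the supremum over all corners, which is what \(\text{disc}_{k}^{\ast}(L_{\infty}(Q^{n}))\) minimizes over point sets.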
It is worth mentioning that determining the exact asymptotic behavior of the discrepancy, as a function of \(k\) and \(n\), is a challenging problem, still open in the most relevant case \(p=\infty\). The exposition is concise, yet contains many historical remarks. Since the emphasis is on the asymptotics of the above quantities, the reader will not find details on the construction of best sampling or integration points, or of points with low discrepancy; however, references to relevant publications are always given. To fully appreciate this study, the reader should consult the relevant texts for the respective sections. A quick account of the problem under consideration can be found in the author's recent paper [\textit{H.\,Triebel}, ``Numerical integration and discrepancy, a new approach'', Math.\ Nachr.\ 283, No.\,1, 139--159 (2010; Zbl 1187.41015)].
    Keywords: Faber system; Sobolev-Besov space; sampling; numerical integration; discrepancy
