Hyperbolic cross approximation. Lecture notes given at the courses on constructive approximation and harmonic analysis, Barcelona, Spain, May 30 -- June 3, 2016 (Q1991071)

scientific article (English)
29 October 2018
This is a book about multivariate trigonometric approximation. Passing from the univariate to the multivariate case involves several choices. First, one has to decide how to define the finite-dimensional spaces of multivariate trigonometric polynomials that generalize the univariate space $\mathcal{T}_n=\mathrm{span}\{e^{ikx}: |k|\leq n\}$. Here the authors choose to generalize the frequency set $\{k\in\mathbb{Z}: |k|\leq n\}$ to the hyperbolic cross $\Gamma(N)=\{\mathbf{k}\in\mathbb{Z}^d:\prod_{j=1}^d\max\{|k_j|,1\}\le N\}$ and thus consider approximation from the set of multivariate trigonometric polynomials $\mathcal{T}(N)=\mathrm{span}\{e^{i(\mathbf{k},\mathbf{x})}: \mathbf{k}\in\Gamma(N)\}$. For practical reasons, $\Gamma(N)$ is replaced, depending on a level $n$, by a dyadic (step) analogue built as a union of hyperrectangles, $Q_n=\bigcup_{|\mathbf{s}|_1\le n}\rho(\mathbf{s})$, where for a tuple $\mathbf{s}=(s_1,\ldots,s_d)$ of non-negative integers $\rho(\mathbf{s})=\{\mathbf{k}\in\mathbb{Z}^d: [2^{s_j-1}]\le|k_j|<2^{s_j},~j=1,\ldots,d\}$.
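To make the size advantage of $Q_n$ concrete, the following minimal Python sketch (not taken from the notes; the function names and the printed comparison are illustrative) enumerates the step hyperbolic cross from the definition above and compares its cardinality with that of the full tensor-product grid $\{\mathbf{k}\in\mathbb{Z}^d:|k_j|<2^n\}$; the former grows roughly like $2^n n^{d-1}$, the latter like $2^{nd}$.

```python
from itertools import product

def dyadic_block(s):
    """1-D frequency block for level s: integers k with [2^(s-1)] <= |k| < 2^s (just {0} for s = 0)."""
    if s == 0:
        return [0]
    lo, hi = 2 ** (s - 1), 2 ** s
    return list(range(-hi + 1, -lo + 1)) + list(range(lo, hi))

def step_hyperbolic_cross(d, n):
    """Q_n = union of rho(s) over all s in N_0^d with |s|_1 <= n, as a set of d-tuples."""
    freqs = set()
    for s in product(range(n + 1), repeat=d):          # all level vectors with s_j <= n
        if sum(s) <= n:
            # rho(s) is the Cartesian product of the 1-D dyadic blocks
            freqs.update(product(*(dyadic_block(sj) for sj in s)))
    return freqs

if __name__ == "__main__":
    n = 5
    for d in (1, 2, 3):
        Q = step_hyperbolic_cross(d, n)
        full_grid = (2 ** (n + 1) - 1) ** d            # full grid {k : |k_j| < 2^n}
        print(f"d = {d}: |Q_{n}| = {len(Q)}   vs   full grid = {full_grid}")
```

For $d=1$ the two sets coincide, while already for $d=2$ and $d=3$ the step hyperbolic cross is much smaller; this reduction is precisely what makes approximation from $\mathcal{T}(Q_n)$ attractive for functions of bounded mixed smoothness.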
Next, one has to define the type of functions one wants to approximate. Obvious choices are Sobolev-type function classes $\mathbf{W}_p^r$ with bounded mixed derivatives ($0< p\le \infty$ refers to the Lebesgue space $L_p$ and $r$ to the order of the derivatives) or, less commonly, the Besov-type classes $\mathbf{H}_p^r$ and their generalization $\mathbf{B}_{p,\theta}^r$, which are obtained by considering bounded mixed differences ($p$ and $r$ as above, while $0<\theta\le \infty$ refers to an $L_\theta$ norm).

Once the function class $\mathbf{F}$ is fixed, the question is how well functions $f\in\mathbf{F}$ can be approximated. This is measured by $n$-widths, which depend on the kind of approximation technique used; one standard formulation of the most common widths is recalled in the display at the end of this review. For example, the Kolmogorov width $d_n(\mathbf{F},L_p)$ indicates how well every function from the class $\mathbf{F}$ can be approximated by elements of an optimally chosen $n$-dimensional linear subspace of $L_p$. To get an idea of the speed of convergence of the methods, it is important to know the asymptotic behaviour of these $n$-widths. Several such $n$-widths are defined, each corresponding to an optimum within a class of approximation operators: the linear width is obtained when all linear operators of the given rank are allowed; one may instead restrict to rank-$m$ orthogonal projection operators (orthowidth, or Fourier width), or to recovery operators that use only function values. In some cases the hyperbolic cross, that is, approximation by trigonometric polynomials from $\mathcal{T}(N)$ or $\mathcal{T}(Q_n)$, turns out to be an optimal choice.

Entropy numbers measure the ``degree of compactness'' of a set; they are used, for example, to bound $n$-widths from below for several function classes $\mathbf{F}$ in $L_q$.

Sparse approximation is a nonlinear problem in which a best approximation is selected as a combination of a few elements from a dictionary (for example, hyperbolic cross polynomials). Other techniques involve a wavelet-type basis. The construction of cubature formulas to approximate integrals is another type of approximation problem that is considered. Quasi-Monte Carlo is an obvious choice, but then the choice of a particular lattice needs to be considered; alternatively, one may use universal formulas, such as Frolov cubature, that do not depend on the particular function space. If not only the evaluation points but also the weights are optimized, one may end up with formulas for which the sum of the weights is not $1$, meaning that constant functions are no longer integrated exactly.

The last two chapters collect miscellaneous problems related to the material discussed, such as classes of mixed smoothness, direct and inverse theorems, and high-dimensional approximation.

There is a short appendix that recalls some prerequisite basic facts, and a long list of references in which properties and proofs that are not included can be looked up. This book forms the lecture notes of an advanced one-week course given by V. Temlyakov and T. Ullrich in 2016 in Barcelona. Almost every chapter has a list of exercises formulated as open problems, together with some historical comments, and sometimes there are conjectures yet to be proved. Thus the book brings the reader up to the state of the art and gives concrete suggestions for further research.
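As announced above, here is one standard formulation of the most common widths (this display is added for orientation rather than quoted from the notes; notation and normalisations vary slightly in the literature). For a function class $\mathbf{F}$ in a normed space $X$,
\[
d_n(\mathbf{F},X)=\inf_{\substack{V\subset X\\ \dim V\le n}}\,\sup_{f\in\mathbf{F}}\,\inf_{g\in V}\|f-g\|_X,
\qquad
\lambda_n(\mathbf{F},X)=\inf_{\substack{A:X\to X\ \mathrm{linear}\\ \mathrm{rank}\,A\le n}}\,\sup_{f\in\mathbf{F}}\|f-Af\|_X,
\]
are the Kolmogorov and the linear width, respectively; the orthowidth (Fourier width) restricts the infimum in $\lambda_n$ to operators of the form $f\mapsto\sum_{j=1}^{n}\langle f,u_j\rangle u_j$ with an orthonormal system $u_1,\ldots,u_n$, and the sampling (recovery) widths restrict it to operators built only from function values.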
Dirichlet kernel
de la Vallée Poussin kernel
hyperbolic cross
\(n\)-width
entropy number
linear approximation
multivariate trigonometric polynomial
interpolation
recovery
cubature
discrepancy
bounded mixed derivative
bounded mixed difference