Kernel-based interpolation at approximate Fekete points (Q2021778)

From MaRDI portal
scientific article

Publication date: 27 April 2021
This paper corrects certain formulae in an earlier paper by the authors; it can be summarised as follows. Interpolation with kernel functions, in particular with radial basis function kernels, is a very good strategy for obtaining useful, accurate approximations to multivariate data and functions. Examples of suitable radial basis functions include the so-called Gaussian kernel and the famous multiquadric function. The method works in any dimension, and convergence results within ``native spaces'' are available. As the authors note, these features make radial basis function interpolation very attractive. A caveat when using these approximations is that the condition numbers of the Gram (collocation, interpolation) matrices can be very high. Since these condition numbers depend not only on the choice of the radial basis function but also on the choice of the interpolation points, they are usually dealt with by choosing well-adjusted centres that maximise the determinants of the collocation (interpolation) matrices. So-called Fekete points, and other choices considered in the present paper, reduce these condition numbers. In this article, approximations to Fekete points are computed, and the results are presented within the context of approximation (error) estimates in the Chebyshev norm, together with numerical examples for the Gauss kernel. The error estimates use the standard methods with power function estimates. These improvements are possible for the special case \(\phi(r)=\exp(-c^2r^2)\) because the expansions of the power functions can be computed explicitly. Although the interesting special case of Gauss kernels is only carried out in the one-dimensional setting, the authors generalise the method to multivariate approximation by tensor-product formulations.
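The setting described above — interpolating with a Gaussian kernel and observing the ill-conditioning of the Gram matrix — can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's method; the shape parameter `c`, the node set, and the test function are arbitrary choices made here for demonstration:

```python
import numpy as np

def gauss_kernel(x, y, c=2.0):
    """Gaussian RBF Gram block: phi(|x - y|) = exp(-c^2 (x - y)^2)."""
    return np.exp(-c**2 * (x[:, None] - y[None, :])**2)

def rbf_interpolant(centres, values, c=2.0):
    """Solve the (possibly ill-conditioned) Gram system A w = f
    and return the interpolant s(x) = sum_j w_j phi(|x - x_j|)."""
    A = gauss_kernel(centres, centres, c)
    w = np.linalg.solve(A, values)
    return lambda x: gauss_kernel(np.atleast_1d(x), centres, c) @ w

# Interpolate f(x) = sin(3x) at a few equispaced points in [-1, 1].
X = np.linspace(-1.0, 1.0, 9)
f = np.sin(3 * X)
s = rbf_interpolant(X, f)

print(np.max(np.abs(s(X) - f)))            # residual at the nodes
print(np.linalg.cond(gauss_kernel(X, X)))  # Gram matrix condition number
```

Even for nine points the printed condition number is already large, which is the phenomenon motivating the careful choice of centres discussed in the review.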
The approximation of the Fekete points is understood through the way the kernels are used: to find the approximation points, the (radial basis function) kernels are expanded, and the points are approximated by truncating these (orthogonal) expansions. The orthogonality with respect to which those expansions take place is defined via inner products in the reproducing kernel Hilbert space (native space). In the special case of the univariate Gauss kernel, the computation of those points can be interpreted as solving a convex optimisation problem.
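The determinant-maximisation idea behind (approximate) Fekete points can be illustrated with a greedy selection: at each step, pick the candidate that maximises the power function, which equals the factor by which the Gram determinant grows. This is the standard P-greedy / pivoted-Cholesky scheme, shown here only as a generic sketch of the principle, not the authors' truncated-expansion algorithm; the kernel, shape parameter, and candidate grid are assumptions of this example:

```python
import numpy as np

def greedy_fekete(candidates, n_points, c=2.0):
    """Greedily select points maximising the Gaussian-kernel power
    function (equivalently, the Gram determinant growth) at each step."""
    K = np.exp(-c**2 * (candidates[:, None] - candidates[None, :])**2)
    chosen = []
    power2 = np.diag(K).copy()                 # squared power function, phi(0) = 1
    V = np.zeros((len(candidates), n_points))  # Newton basis values
    for k in range(n_points):
        i = int(np.argmax(power2))             # next approximate Fekete point
        chosen.append(i)
        # Newton-basis update: one pivoted Cholesky step on K.
        v = (K[:, i] - V[:, :k] @ V[i, :k]) / np.sqrt(power2[i])
        V[:, k] = v
        power2 = power2 - v**2
        power2[i] = 0.0                        # guard against round-off
    return candidates[chosen]

pts = greedy_fekete(np.linspace(-1.0, 1.0, 201), 8)
print(np.sort(pts))
```

The selected points cluster towards the interval endpoints, the behaviour one expects of Fekete-type point sets in one dimension.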
Keywords: reproducing kernel Hilbert spaces; Gaussian kernel; radial basis functions