Linearly constrained reconstruction of functions by kernels with applications to machine learning (Q2498389)

From MaRDI portal
Property / full work available at URL: https://doi.org/10.1007/s10444-004-7616-1
Property / OpenAlex ID: W2023409392


Language: English
Label: Linearly constrained reconstruction of functions by kernels with applications to machine learning
Description: scientific article

    Statements

    Linearly constrained reconstruction of functions by kernels with applications to machine learning (English)
    16 August 2006
    A particularly interesting example of reconstruction by kernels is interpolation with positive definite radial basis functions. The radial basis function interpolant can be interpreted as the minimizer of a Hilbert space norm among all functions which interpolate the given data. The norm is the native space norm corresponding to the basis function (compare spline interpolation, where the norm is the \(L^2\)-norm of the second derivative). In this paper the interpolation condition is replaced by a uniform error bound \(| s(x_j)-f_j| \leq \eta\) at the data sites \(\{x_j\}_{j=1}^N\subset\mathbb{R}^d\) (the corresponding inequality for functions is also considered). The authors prove that there exists a unique minimal norm solution satisfying this condition. The solution has a particular form: defining the contact set \(Y=\{y\in X: | s(y)-f(y)| =\eta\}\), where \(X\) denotes the underlying domain, one has \(s\in \text{span}\{K(\cdot,y):y\in Y\}\), where \(K(\cdot,y)\) is the reproducing kernel of the Hilbert space. In addition, some optimality conditions are proved. There is a trade-off between accurate solutions for small \(\eta\) and simpler solutions for larger values of \(\eta\); this motivates the study of a case where \(\eta\) is not constant but included in the optimization problem. The authors also present an iterative greedy algorithm to find the optimal approximating function, based on monotonicity results and quadratic programming techniques. Connections to machine learning and results from numerical experiments are also presented.
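    With an expansion \(s=\sum_j \alpha_j K(\cdot,x_j)\), the squared native space norm is \(\alpha^\top K\alpha\) and the uniform error bound becomes \(|K\alpha-f|\leq\eta\) componentwise, so the problem reviewed above is a small quadratic program. The sketch below is only an illustration, not the authors' greedy algorithm: it assumes a Gaussian kernel and hands the quadratic program to a generic SLSQP solver, and all names, parameter values, and test data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2) (one possible kernel)."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def constrained_kernel_fit(X, f, eta, gamma=1.0):
    """Minimize the squared native-space norm alpha^T K alpha of s = sum_j alpha_j K(., x_j)
    subject to the uniform error bound |s(x_i) - f_i| <= eta at all data sites."""
    K = gaussian_kernel(X, X, gamma)
    n = len(f)
    # start from (a regularized version of) the plain interpolant, which is essentially feasible
    alpha0 = np.linalg.solve(K + 1e-8 * np.eye(n), f)
    constraints = [
        {"type": "ineq", "fun": lambda a: eta - (K @ a - f)},  # s(x_i) - f_i <= eta
        {"type": "ineq", "fun": lambda a: eta + (K @ a - f)},  # s(x_i) - f_i >= -eta
    ]
    res = minimize(lambda a: a @ K @ a, alpha0, jac=lambda a: 2.0 * K @ a,
                   constraints=constraints, method="SLSQP")
    return res.x

# hypothetical usage: noisy samples of sin(x), tolerance eta = 0.1
rng = np.random.default_rng(0)
X = np.linspace(-3.0, 3.0, 15)[:, None]
f = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(15)
alpha = constrained_kernel_fit(X, f, eta=0.1, gamma=1.0)
residual = gaussian_kernel(X, X, 1.0) @ alpha - f
print("max residual:", np.max(np.abs(residual)))  # stays within eta up to solver tolerance
```

    The paper's contact-set result is visible in such a sketch: coefficients \(\alpha_j\) associated with sites where the bound is inactive tend to zero, so only "contact" sites contribute to the expansion.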
    positive definite radial basis functions
    quadratic programming
    support vectors
    vector machines
    regression
    reproducing kernel
    Hilbert space
    greedy algorithm
    numerical experiments
