Almost optimal estimates for approximation and learning by radial basis function networks (Q2251472)

scientific article

Language: English
Label: Almost optimal estimates for approximation and learning by radial basis function networks
Description: scientific article

    Statements

    Almost optimal estimates for approximation and learning by radial basis function networks (English)
    14 July 2014
    The paper is devoted to the approximation of differentiable multivariate functions by radial basis function networks (RBFNs) of the form \[ R(x)=\sum_{k=0}^N c_k \sigma (w_k|x - \theta_k|). \] Here \(\sigma\) is the activation function, \(c_k, w_k \in\mathbb R\) and \(\theta_k \in\mathbb R^d\). Let \(B^d\) denote the unit ball in \(\mathbb R^d\). The authors prove that for any given polynomial \(P\) and any sufficiently smooth activation \(\sigma\) there exists an RBFN approximating \(P\) arbitrarily closely in \(C(B^d)\); from this they derive almost optimal rates of approximation for differentiable functions. They also study the learning problem and prove that, using standard empirical risk minimization over such networks, RBFNs realize an almost optimal learning rate.
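    The network form and the empirical risk minimization step mentioned in the review can be illustrated with a small numerical sketch. The Python snippet below is not the authors' construction: it merely evaluates a network \(R(x)=\sum_k c_k\sigma(w_k|x-\theta_k|)\) and fits the outer coefficients \(c_k\) by least squares, one simple instance of empirical risk minimization. The choice \(\sigma=\tanh\), the random centers \(\theta_k\), and the fixed inner scales \(w_k\) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbfn(X, c, w, theta, sigma=np.tanh):
    """Evaluate R(x) = sum_k c_k * sigma(w_k * |x - theta_k|) at each row of X."""
    # X: (m, d) inputs; c, w: (N+1,) coefficients/scales; theta: (N+1, d) centers
    r = np.linalg.norm(X[:, None, :] - theta[None, :, :], axis=2)  # |x_i - theta_k|, shape (m, N+1)
    return sigma(w * r) @ c

def fit_outer_weights(X, y, w, theta, sigma=np.tanh):
    """Least-squares empirical risk minimization over the outer coefficients c_k,
    with the inner scales w_k and centers theta_k held fixed (an illustrative simplification)."""
    r = np.linalg.norm(X[:, None, :] - theta[None, :, :], axis=2)
    Phi = sigma(w * r)                                # design matrix, shape (m, N+1)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return c

# Toy usage: learn f(x) = |x|^2 from noisy samples on the unit ball in R^2.
rng = np.random.default_rng(0)
d, N = 2, 20
X = rng.uniform(-1.0, 1.0, size=(400, d))
X = X[np.linalg.norm(X, axis=1) <= 1.0]               # restrict samples to the unit ball
y = np.sum(X**2, axis=1) + 0.01 * rng.standard_normal(len(X))
theta = rng.uniform(-1.0, 1.0, size=(N + 1, d))       # centers chosen at random (assumption)
w = np.full(N + 1, 2.0)                               # fixed inner scales (assumption)
c = fit_outer_weights(X, y, w, theta)
print("training RMSE:", np.sqrt(np.mean((rbfn(X, c, w, theta) - y) ** 2)))
```

    Minimizing only over the outer coefficients keeps the sketch convex and short; the paper's analysis concerns empirical risk minimization over the full class of such networks.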
    radial basis function networks
    rate of convergence
    approximation of differentiable multivariate functions
    machine learning
    empirical risk minimization

    Identifiers