Almost optimal estimates for approximation and learning by radial basis function networks (Q2251472)
From MaRDI portal
Property / author: Shao-Bo Lin
Property / author: Zong Ben Xu
Property / reviewed by: Yu. I. Makovoz
| Language | Label | Description | Also known as |
| --- | --- | --- | --- |
| English | Almost optimal estimates for approximation and learning by radial basis function networks | scientific article | |
Statements
Almost optimal estimates for approximation and learning by radial basis function networks (English)
14 July 2014
The paper is devoted to the approximation of differentiable multivariate functions by radial basis function networks (RBFNs) of the form \[ R(x)=\sum_{k=0}^N c_k\, \sigma (w_k|x - \theta_k|), \] where \(\sigma\) is the activation function, \(c_k, w_k \in\mathbb R\), and \(\theta_k \in\mathbb R^d\). Let \(B^d\) be the unit cube in \(\mathbb R^d\). The authors prove that, for any given polynomial \(P\) and any sufficiently smooth activation \(\sigma\), there exists an RBFN approximating \(P\) arbitrarily closely in \(C(B^d)\). They also study the learning capability of such networks and prove that, using standard empirical risk minimization, RBFNs can realize an almost optimal learning rate.
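As an illustration only (not the construction or the learning scheme analyzed in the paper), the following minimal NumPy sketch builds a network of the form above and fits it by empirical risk minimization over the outer coefficients \(c_k\); the Gaussian activation, the randomly drawn centers \(\theta_k\) and scales \(w_k\), the synthetic data, and the least-squares objective are all assumptions made for this example.

```python
import numpy as np

# Illustrative sketch (assumptions, not the authors' method): an RBFN
#   R(x) = sum_k c_k * sigma(w_k * |x - theta_k|)
# with a Gaussian activation, fitted by least-squares empirical risk
# minimization over the coefficients c_k (centers and scales kept fixed).

def rbfn_features(X, centers, scales, sigma=lambda r: np.exp(-r**2)):
    """Feature matrix Phi with Phi[i, k] = sigma(w_k * |x_i - theta_k|)."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return sigma(scales[None, :] * dists)

rng = np.random.default_rng(0)
d, N, n = 2, 20, 200                      # dimension, number of units, samples
X = rng.uniform(-1.0, 1.0, size=(n, d))   # sample points in the unit cube of R^d
y = np.sin(np.pi * X[:, 0]) * X[:, 1]     # synthetic target values

centers = rng.uniform(-1.0, 1.0, size=(N, d))   # theta_k (assumed random here)
scales = rng.uniform(0.5, 2.0, size=N)          # w_k (assumed random here)

Phi = rbfn_features(X, centers, scales)
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # empirical risk minimization over c_k

train_mse = np.mean((Phi @ c - y) ** 2)
print(f"training MSE: {train_mse:.4f}")
```

The sketch only shows the functional form and the empirical-risk-minimization step; the paper's results concern how well such networks can approximate smooth functions and the learning rates attainable by this minimization, not any particular choice of centers or activation.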
radial basis function networks
rate of convergence
approximation of differentiable multivariate functions
machine learning
empirical risk minimization