Almost optimal estimates for approximation and learning by radial basis function networks (Q2251472)

From MaRDI portal
Property / DOI
 
Property / DOI: 10.1007/s10994-013-5406-z / rank
Normal rank
 
Property / author
 
Property / author: Shao-Bo Lin / rank
 
Normal rank
Property / author
 
Property / author: Zong Ben Xu / rank
 
Normal rank
Property / reviewed by
 
Property / reviewed by: Yu. I. Makovoz / rank
 
Normal rank
Property / MaRDI profile type
 
Property / MaRDI profile type: MaRDI publication profile / rank
 
Normal rank
Property / full work available at URL
 
Property / full work available at URL: https://doi.org/10.1007/s10994-013-5406-z / rank
 
Normal rank
Property / OpenAlex ID
 
Property / OpenAlex ID: W2110456244 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Universal approximation bounds for superpositions of a sigmoidal function / rank
 
Normal rank
Property / cites work
 
Property / cites work: Approximation and learning by greedy algorithms / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4550961 / rank
 
Normal rank
Property / cites work
 
Property / cites work: On quasi-interpolation by radial basis functions with scattered centres / rank
 
Normal rank
Property / cites work
 
Property / cites work: Error bounds for approximation with neural networks / rank
 
Normal rank
Property / cites work
 
Property / cites work: Optimal rates for the regularized least-squares algorithm / rank
 
Normal rank
Property / cites work
 
Property / cites work: On the mathematical foundations of learning / rank
 
Normal rank
Property / cites work
 
Property / cites work: Best choices for regularization parameters in learning theory: on the bias-variance problem. / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4273944 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Approximation methods for supervised learning / rank
 
Normal rank
Property / cites work
 
Property / cites work: Improved multiquadric method for elliptic partial differential equations via PDE collocation on the boundary / rank
 
Normal rank
Property / cites work
 
Property / cites work: A radial basis function method for the shallow water equations on a sphere / rank
 
Normal rank
Property / cites work
 
Property / cites work: A distribution-free theory of nonparametric regression / rank
 
Normal rank
Property / cites work
 
Property / cites work: A bound on the approximation order of surface splines / rank
 
Normal rank
Property / cites work
 
Property / cites work: Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks / rank
 
Normal rank
Property / cites work
 
Property / cites work: The essential rate of approximation for radial function manifold / rank
 
Normal rank
Property / cites work
 
Property / cites work: Essential rate for approximation by spherical neural networks / rank
 
Normal rank
Property / cites work
 
Property / cites work: On best approximation of classes by radial functions / rank
 
Normal rank
Property / cites work
 
Property / cites work: On lower bounds in radial basis approximation / rank
 
Normal rank
Property / cites work
 
Property / cites work: Pseudo-dimension and entropy of manifolds formed by affine-invariant dictionary / rank
 
Normal rank
Property / cites work
 
Property / cites work: Approximation by neural networks and learning theory / rank
 
Normal rank
Property / cites work
 
Property / cites work: Lower bounds for multivariate approximation by affine-invariant dictionaries / rank
 
Normal rank
Property / cites work
 
Property / cites work: Lower bounds for approximation by MLP neural networks / rank
 
Normal rank
Property / cites work
 
Property / cites work: Entropy and the combinatorial dimension / rank
 
Normal rank
Property / cites work
 
Property / cites work: On the tractability of multivariate integration and approximation by neural networks / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q4026010 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Error estimates and condition numbers for radial basis function interpolation / rank
 
Normal rank
Property / cites work
 
Property / cites work: Approximation by radial basis functions with finitely many centers / rank
 
Normal rank
Property / cites work
 
Property / cites work: Approximation in learning theory / rank
 
Normal rank
Property / cites work
 
Property / cites work: Q2752687 / rank
 
Normal rank
Property / cites work
 
Property / cites work: Scattered Data Approximation / rank
 
Normal rank
Property / cites work
 
Property / cites work: The rate of approximation of Gaussian radial basis neural networks in continuous function space / rank
 
Normal rank
Property / cites work
 
Property / cites work: Optimal rate of the regularized regression learning algorithm / rank
 
Normal rank
Property / cites work
 
Property / cites work: Approximation with polynomial kernels and SVM classifiers / rank
 
Normal rank

Latest revision as of 16:32, 17 December 2024

scientific article
Language: English
Label: Almost optimal estimates for approximation and learning by radial basis function networks
Description: scientific article

    Statements

    Almost optimal estimates for approximation and learning by radial basis function networks (English)
    14 July 2014
    The paper is devoted to the approximation of differentiable multivariate functions by radial basis function networks (RBFNs), defined by a formula of the following type: \[ R(x)=\sum_{k=0}^N c_k \sigma (w_k|x - \theta_k|). \] Here \(\sigma\) is the activation function, \(c_k,w_k \in\mathbb R\), and \(\theta_k \in\mathbb R^d\). Let \(B^d\) be the unit ball in \(\mathbb R^d\). The authors prove that for any given polynomial \(P\) and any sufficiently smooth function \(\sigma\) there exists an RBFN approximating \(P\) arbitrarily closely in \(C(B^d)\). The authors also study the learning problem: they prove that, using standard empirical risk minimization, RBFNs can realize an almost optimal learning rate.
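    The network \(R(x)\) above can be sketched in a few lines. This is only an illustrative evaluation of the formula, not the authors' construction; the Gaussian activation \(\sigma(t)=e^{-t^2}\) is an assumed example choice (the paper allows any sufficiently smooth \(\sigma\)).

```python
import numpy as np

def rbf_network(x, c, w, theta, sigma=lambda t: np.exp(-t**2)):
    """Evaluate R(x) = sum_k c_k * sigma(w_k * |x - theta_k|).

    x     : point in R^d, shape (d,)
    c, w  : outer coefficients and scales, each shape (N+1,)
    theta : centres, shape (N+1, d)
    sigma : activation function (Gaussian here, as an illustrative choice)
    """
    r = np.linalg.norm(x - theta, axis=1)   # distances |x - theta_k| to each centre
    return float(np.sum(c * sigma(w * r)))

# A single centre at the origin with c = w = 1 gives R(0) = sigma(0) = 1.
```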
    radial basis function networks
    rate of convergence
    approximation of differentiable multivariate functions
    machine learning
    empirical risk minimization

    Identifiers