Error bounds for approximation with neural networks (Q5959036)

From MaRDI portal
 
Cites:
- Universal approximation bounds for superpositions of a sigmoidal function
- Training neural networks with noisy data as an ill-posed problem
- Q4365433
- Q4895893
- Multilayer feedforward networks are universal approximators
- Approximation properties of a multilayered feedforward artificial neural network
- Degree of approximation by neural and translation networks with a single hidden layer
- Generalization bounds for function approximation from scattered noisy data
- Approximation by Ridge Functions and Neural Networks
- Approximation by radial basis functions with finitely many centers
- Q4404383
- Convergence rates of certain approximate solutions to Fredholm integral equations of the first kind


scientific article; zbMATH DE number 1722133
Language: English
Label: Error bounds for approximation with neural networks

    Statements

    Error bounds for approximation with neural networks (English)
    28 April 2002
    The paper considers neural network approximation by linear combinations of shifts of so-called ridge functions \(\sigma(a_j^T x + b_j)\); such a linear combination corresponds to a neural network with a single hidden layer. Of particular interest are the approximation orders obtainable with these approximations for functions from Sobolev spaces. In the special case where the approximand can be written as a continuous convolution with the kernel used for the approximation, error estimates are given; they depend on the Sobolev smoothness of the kernel, the smoothness of the approximand, and the number of kernel functions in the network. The analysis applies to general kernels from a class that contains ridge functions. Applications of the theoretical results to perceptrons are provided.
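    The single-hidden-layer model discussed above can be illustrated with a minimal sketch: a target function is approximated by a sum \(\sum_j c_j\,\sigma(a_j x + b_j)\) of shifted sigmoidal ridge functions, with the outer coefficients fitted by least squares. The target function, the sigmoid kernel, and all parameter choices here are hypothetical illustrations, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical target function to approximate on [0, 1].
f = lambda x: np.sin(2 * np.pi * x)

# Single hidden layer of n ridge functions sigma(a_j * x + b_j)
# with randomly drawn inner weights a_j and shifts b_j.
n = 50
a = rng.normal(scale=10.0, size=n)
b = rng.normal(scale=10.0, size=n)

x = np.linspace(0.0, 1.0, 200)
# Design matrix: column j holds sigma(a_j * x + b_j) on the grid.
Phi = sigmoid(np.outer(x, a) + b)

# Fit the outer coefficients c_j by least squares.
c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)
approx = Phi @ c

print("max error on grid:", np.max(np.abs(approx - f(x))))
```

    In this sketch only the outer coefficients are trained; the error bounds studied in the paper concern how such approximations improve as the number of kernel functions grows and as the smoothness of kernel and approximand increases.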
    Keywords: neural networks; approximation order; error bounds