Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation (Q6191372)

From MaRDI portal
DOI: 10.1007/s00211-023-01384-6


Language: English
Label: Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation
Description: scientific article; zbMATH DE number 7802489

    Statements

    Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation (English)
    9 February 2024
    The paper studies the approximation of a function and its derivatives by a two-layer neural network. The activation function is chosen to be the rectified power unit \[ \text{ReLU}^k(x)=(\max(0,x))^k, \] which is \(k-1\) times continuously differentiable, so that derivatives can be approximated as well. By investigating the corresponding Barron space, the authors show that two-layer networks with the \(\text{ReLU}^k\) activation function can simultaneously approximate an unknown function and its derivatives. A Tikhonov-type regularization method is proposed for this purpose; error bounds are established, and several numerical examples are provided to support the efficiency of the proposed approach.
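    To make the setting concrete, the following minimal sketch (not taken from the paper; the random-feature construction, the ridge solver, and all parameter values are illustrative assumptions) fits a two-layer \(\text{ReLU}^2\) network to noisy samples of a scalar function by Tikhonov (ridge) regression on the outer coefficients, and then evaluates the derivative of the fitted network using \(\frac{d}{dx}\,\text{ReLU}^k(wx+b)=k\,\text{ReLU}^{k-1}(wx+b)\,w\) for \(k\ge 2\).

    import numpy as np

    # Illustrative sketch, not the paper's exact scheme: only the outer
    # coefficients are fitted; inner weights and biases are drawn at random.
    rng = np.random.default_rng(0)
    k = 2                     # power of the ReLU^k activation
    n_neurons = 200
    n_samples = 400
    noise_level = 1e-2
    alpha = 1e-6              # Tikhonov (ridge) regularization parameter

    # Noisy samples of f(x) = sin(2*pi*x) on [0, 1]
    x = rng.uniform(0.0, 1.0, n_samples)
    y = np.sin(2 * np.pi * x) + noise_level * rng.standard_normal(n_samples)

    # Random inner weights and biases (kept fixed)
    w = rng.uniform(-1.0, 1.0, n_neurons)
    b = rng.uniform(-1.0, 1.0, n_neurons)

    def relu_pow(z, p):
        # ReLU^p(z) = max(0, z)^p
        return np.maximum(z, 0.0) ** p

    def features(xs):
        # Feature matrix Phi[i, j] = ReLU^k(w_j * xs_i + b_j)
        return relu_pow(np.outer(xs, w) + b, k)

    def d_features(xs):
        # Derivative of each feature w.r.t. x: k * ReLU^(k-1)(w_j*x + b_j) * w_j
        return k * relu_pow(np.outer(xs, w) + b, k - 1) * w

    # Tikhonov-regularized least squares for the outer coefficients a
    Phi = features(x)
    A = Phi.T @ Phi + alpha * np.eye(n_neurons)
    a = np.linalg.solve(A, Phi.T @ y)

    # The same coefficients give both the function and its derivative
    x_test = np.linspace(0.0, 1.0, 200)
    f_hat = features(x_test) @ a
    df_hat = d_features(x_test) @ a

    print("max |f - f_hat|  :", np.abs(np.sin(2 * np.pi * x_test) - f_hat).max())
    print("max |f' - df_hat|:", np.abs(2 * np.pi * np.cos(2 * np.pi * x_test) - df_hat).max())

    The point of the sketch is only that a single fitted \(\text{ReLU}^k\) network yields approximations of the function and of its derivative simultaneously; the paper's actual analysis works in the corresponding Barron space and supplies the error bounds for this kind of Tikhonov-regularized reconstruction.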
    ReLU neural networks
    Tikhonov regularization
    Barron spaces
    derivative approximation

    Identifiers