Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation (Q6191372)
From MaRDI portal
scientific article; zbMATH DE number 7802489
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation | scientific article; zbMATH DE number 7802489 | |
Statements
Two-layer networks with the \(\text{ReLU}^k\) activation function: Barron spaces and derivative approximation (English)
9 February 2024
The paper studies the approximation of a function and its derivatives by a two-layer neural network. The activation function is chosen to be the rectified power unit \[ \text{ReLU}^k(x)=(\max(0,x))^k, \] so that approximation of derivatives becomes possible. By investigating the corresponding Barron space, the authors show that two-layer networks with the \(\text{ReLU}^k\) activation function can simultaneously approximate an unknown function and its derivatives. A Tikhonov-type regularization method is proposed to achieve this. Error bounds are established, and several numerical examples are provided to demonstrate the efficiency of the proposed approach.
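A minimal illustrative sketch (not the authors' implementation; the width, power \(k\), and regularization strength below are arbitrary choices for demonstration): a one-dimensional two-layer \(\text{ReLU}^k\) network whose outer coefficients are fitted by a Tikhonov-type (ridge) least-squares problem. Because \(\frac{d}{dx}\,\text{ReLU}^k(x) = k\,\text{ReLU}^{k-1}(x)\) for \(k \ge 1\), the same fitted parameters also yield an approximation of the target's derivative.

```python
# Illustrative sketch only: two-layer ReLU^k network with randomly fixed inner
# weights; outer coefficients fitted by ridge (Tikhonov-type) regression.
import numpy as np

def relu_k(x, k):
    """Rectified power unit: (max(0, x))**k."""
    return np.maximum(0.0, x) ** k

def relu_k_prime(x, k):
    """Derivative of ReLU^k for k >= 1: k * (max(0, x))**(k-1)."""
    return k * np.maximum(0.0, x) ** (k - 1)

rng = np.random.default_rng(0)
k, m = 3, 200                       # activation power and network width (assumed)
w = rng.uniform(-1.0, 1.0, m)       # inner weights, fixed at random
b = rng.uniform(-1.0, 1.0, m)       # inner biases
x = np.linspace(-1.0, 1.0, 400)
target = np.sin(np.pi * x)          # example target function

# Feature matrix of the two-layer network f(x) = sum_j a_j * ReLU^k(w_j x + b_j).
Phi = relu_k(np.outer(x, w) + b, k)                 # shape (400, m)
lam = 1e-6                                          # regularization strength (assumed)
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ target)

f = Phi @ a                                         # network output
df = (relu_k_prime(np.outer(x, w) + b, k) * w) @ a  # exact derivative of the network
print("max |f - target|  :", np.abs(f - target).max())
print("max |f' - target'|:", np.abs(df - np.pi * np.cos(np.pi * x)).max())
```

The printed errors illustrate that one set of coefficients approximates both the target and its derivative, which is the phenomenon the paper quantifies via Barron-space error bounds.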
ReLU neural networks
Tikhonov regularization
Barron spaces
derivative approximation