An approximation by neural networks with a fixed weight (Q1767906)
From MaRDI portal
Property / full work available at URL: https://doi.org/10.1016/j.camwa.2003.06.008
Property / OpenAlex ID: W1970705963
scientific article
Language | Label | Description | Also known as
---|---|---|---
English | An approximation by neural networks with a fixed weight | scientific article |
Statements
An approximation by neural networks with a fixed weight (English)
8 March 2005
A function \(\sigma: {\mathbb R} \to {\mathbb R}\) is called sigmoidal if \(\lim_{x\to -\infty} \sigma(x)=0\) and \(\lim_{x\to +\infty} \sigma(x)=1\). Theorem 1. Let \(\sigma\) be a bounded sigmoidal function on \({\mathbb R}\) and let \(\epsilon >0\) be given. If \(f\) is a continuous function on \({\mathbb R}\) such that \(\lim_{| x| \to \infty} f(x)=0\), then there exist constants \(b_i, c_i \in {\mathbb R}\) and positive integers \(N,K\) such that \[ \left| f(x)-\sum_{i=1}^N c_i \sigma (Kx+b_i)\right| <\epsilon, \qquad x \in {\mathbb R}. \] Note that the inner weight \(K\) is the same in every term; this single weight is the "fixed weight" of the title. A similar theorem is also proved, and some numerical examples are given.
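To illustrate the flavor of the theorem, the following is a minimal numerical sketch (not the construction from the paper): it approximates a continuous function vanishing at infinity by a sum \(\sum_{i=1}^N c_i\,\sigma(Kx+b_i)\) with a single fixed inner weight \(K\), fitting only the outer coefficients \(c_i\) by least squares. The choice of \(\sigma\) (logistic), the target \(f(x)=e^{-x^2}\), and the values of \(K\), \(N\), and the shift grid are all illustrative assumptions.

```python
# Minimal sketch: fit f by a sum of shifted sigmoids sigma(K*x + b_i)
# with ONE fixed inner weight K, solving only for the outer coefficients c_i.
# All parameter choices below are illustrative assumptions, not the paper's.
import numpy as np

def sigma(t):
    """Logistic sigmoid: a bounded sigmoidal function (limits 0 and 1)."""
    return 1.0 / (1.0 + np.exp(-t))

f = lambda x: np.exp(-x**2)            # continuous, f(x) -> 0 as |x| -> infinity

K = 10                                 # the single fixed (inner) weight
N = 60                                 # number of terms in the sum
x = np.linspace(-6.0, 6.0, 2000)       # evaluation grid
b = np.linspace(-6.0 * K, 6.0 * K, N)  # shifts b_i, spread over K*[-6, 6]

# Design matrix A[j, i] = sigma(K * x_j + b_i); solve A c ~= f(x) in least squares.
A = sigma(K * x[:, None] + b[None, :])
c, *_ = np.linalg.lstsq(A, f(x), rcond=None)

err = np.max(np.abs(A @ c - f(x)))
print(f"N = {N}, fixed weight K = {K}, sup-norm error on the grid: {err:.2e}")
```

The printed sup-norm error gives a rough sense of how well a fixed-\(K\) sum of shifted sigmoids can track such an \(f\); the theorem guarantees that, in principle, the error can be made smaller than any prescribed \(\epsilon\).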
neural network
sigmoidal function
convolution