Training neural networks with noisy data as an ill-posed problem (Q5934295)
scientific article; zbMATH DE number 1606608
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Training neural networks with noisy data as an ill-posed problem | scientific article; zbMATH DE number 1606608 | |
Statements
Training neural networks with noisy data as an ill-posed problem (English)
Publication date: 19 June 2001
Let \(H^s(\Omega)\) be the Sobolev space of order \(s\) on a bounded domain \(\Omega \subset {\mathbb R}^d\). The authors study approximations of a given function \(f \in H^s(\Omega)\) by a neural network \[ f_n(x)=\sum_{j=1}^n c_j \phi(x;t_j) \tag{1} \] with a given activation function \(\phi\). They focus on the stability of such approximations when, instead of \(f\), only a noisy observation \(f^\delta\) with \(\| f-f^\delta\| _{L^2(\Omega)}<\delta\) is known. It follows from general considerations that finding best approximations \(f_n\) is an ill-posed problem, to which the authors apply regularization techniques. They show that training a neural network of the form (1) is equivalent to least squares collocation for a certain corresponding integral equation. Using this equivalence, they derive rates of convergence \(f_n \to f\) for exact data and show that the regularized approximations \(f_n^\delta\) computed from the noisy data converge to \(f\) provided \(n=n(\delta)\) is chosen appropriately.
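A minimal numerical sketch can make the instability concrete. The snippet below is not the authors' code: the target \(f\), the sigmoidal activation \(\phi\), the noise level \(\delta\), and the uniform grids are all illustrative assumptions, and the inner parameters \(t_j\) are fixed on a grid rather than trained. It fits the outer weights \(c_j\) of a network of the form (1) to noisy samples by least squares, with and without Tikhonov regularization:

```python
# Minimal sketch (not the authors' code) of the instability described above:
# fit f_n(x) = sum_j c_j * phi(x; t_j) to noisy data f^delta by least squares,
# with optional Tikhonov regularization of the outer weights c_j.
import numpy as np

rng = np.random.default_rng(0)

def phi(x, t):
    """Sigmoidal ridge function phi(x; t); the inner parameters t_j are
    fixed on a uniform grid here, a simplification of the general setting."""
    return 1.0 / (1.0 + np.exp(-10.0 * (x - t)))

def fit_outer_weights(x, y, n, alpha):
    """Solve min_c ||A c - y||^2 + alpha ||c||^2 with A[i, j] = phi(x_i; t_j),
    via an augmented least-squares system (alpha = 0: unregularized fit)."""
    t = np.linspace(0.0, 1.0, n)
    A = phi(x[:, None], t[None, :])
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
    y_aug = np.concatenate([y, np.zeros(n)])
    c, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return t, c

# Noisy observations f^delta of a smooth illustrative target f on (0, 1).
f = lambda x: np.sin(2.0 * np.pi * x)
x = np.linspace(0.0, 1.0, 200)
delta = 1e-2
y = f(x) + delta * rng.standard_normal(x.size)

for n in (5, 20, 80):
    for alpha in (0.0, 1e-3):
        t, c = fit_outer_weights(x, y, n, alpha)
        f_n = phi(x[:, None], t[None, :]) @ c
        err = np.sqrt(np.mean((f_n - f(x)) ** 2))
        print(f"n={n:3d}  alpha={alpha:g}\tL2 error vs. exact f: {err:.4f}")
```

As \(n\) grows with \(\delta\) fixed, the unregularized fit typically starts tracking the noise, while a small \(\alpha > 0\) (or stopping at a moderate \(n\)) keeps the \(L^2\) error under control; this is the kind of coupling \(n = n(\delta)\) that the convergence result formalizes.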
ill-posed problems
least squares collocation
neural networks