Optimal approximation using complex-valued neural networks
From MaRDI portal
Publication:6431310
Abstract: We prove a quantitative result for the approximation of functions of regularity \(C^k\) (in the sense of real variables) defined on the complex cube \(\Omega_n := [-1,1]^n + i[-1,1]^n \subseteq \mathbb{C}^n\) using shallow complex-valued neural networks. Precisely, we consider neural networks with a single hidden layer and \(m\) neurons, i.e., networks of the form \(z \mapsto \sum_{j=1}^m \sigma_j \cdot \phi(\rho_j^T z + b_j)\), and show that one can approximate every function in \(C^k(\Omega_n; \mathbb{C})\) using a function of that form with error of the order \(m^{-k/(2n)}\) as \(m \to \infty\), provided that the activation function \(\phi\) is smooth but not polyharmonic on some non-empty open set. Furthermore, we show that the selection of the weights \(\sigma_j, b_j \in \mathbb{C}\) and \(\rho_j \in \mathbb{C}^n\) is continuous with respect to \(f\), and prove that the derived rate of approximation is optimal under this continuity assumption. We also discuss the optimality of the result for a possibly discontinuous choice of the weights.
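The network architecture discussed in the abstract can be sketched numerically. The following is an illustrative evaluation of a shallow complex-valued network with a single hidden layer, not code from the publication; the function name `shallow_cvnn`, the choice of activation, and all weight values are hypothetical placeholders for demonstration.

```python
import numpy as np

def shallow_cvnn(z, sigma, rho, b, phi=np.tanh):
    """Evaluate a shallow complex-valued network at z in C^n:
    the map z -> sum_j sigma[j] * phi(rho[j]^T z + b[j]).

    sigma: (m,) complex output weights
    rho:   (m, n) complex input weights (row j is rho_j)
    b:     (m,) complex biases
    phi:   activation applied entrywise (here tanh, which NumPy
           supports for complex arguments; a hypothetical choice)
    """
    pre = rho @ z + b              # (m,) pre-activations rho_j^T z + b_j
    return np.sum(sigma * phi(pre))

# Example: m = 3 hidden neurons, input dimension n = 2
rng = np.random.default_rng(0)
m, n = 3, 2
sigma = rng.normal(size=m) + 1j * rng.normal(size=m)
rho = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
b = rng.normal(size=m) + 1j * rng.normal(size=m)
z = np.array([0.5 + 0.2j, -0.1 + 0.3j])   # a point in the complex cube

out = shallow_cvnn(z, sigma, rho, b)
print(out)  # a single complex number
```

Increasing `m` enlarges the class of representable functions; the abstract's result bounds how fast the best achievable approximation error can decay in `m`.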