Optimal approximation using complex-valued neural networks

From MaRDI portal
Publication:6431310




Abstract: We prove a quantitative result for the approximation of functions of regularity $C^k$ (in the sense of real variables) defined on the complex cube $\Omega_n := [-1,1]^n + i[-1,1]^n \subseteq \mathbb{C}^n$ using shallow complex-valued neural networks. Precisely, we consider neural networks with a single hidden layer and $m$ neurons, i.e., networks of the form $z \mapsto \sum_{j=1}^{m} \sigma_j \, \phi(\rho_j^T z + b_j)$, and show that one can approximate every function in $C^k(\Omega_n; \mathbb{C})$ using a function of that form with error of the order $m^{-k/(2n)}$ as $m \to \infty$, provided that the activation function $\phi : \mathbb{C} \to \mathbb{C}$ is smooth but not polyharmonic on some non-empty open set. Furthermore, we show that the selection of the weights $\sigma_j, b_j \in \mathbb{C}$ and $\rho_j \in \mathbb{C}^n$ is continuous with respect to $f$ and prove that the derived rate of approximation is optimal under this continuity assumption. We also discuss the optimality of the result for a possibly discontinuous choice of the weights.
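To make the network class concrete, the following sketch evaluates a shallow complex-valued network of the form $z \mapsto \sum_{j=1}^{m} \sigma_j \, \phi(\rho_j^T z + b_j)$ at a point of the complex cube. The random weights and the split-tanh activation are illustrative assumptions, not the paper's construction; whether this particular $\phi$ satisfies the non-polyharmonicity condition is assumed here, not verified.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(z):
    # Split-type activation: smooth in the real-variable sense.
    # Holomorphic choices (e.g. exp) are harmonic, hence polyharmonic,
    # and would be excluded by the theorem's hypothesis.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def shallow_cvnn(z, rho, b, sigma):
    # z: (n,) complex input; rho: (m, n); b, sigma: (m,)
    # Returns sum_j sigma_j * phi(rho_j^T z + b_j).
    return np.sum(sigma * phi(rho @ z + b))

n, m = 2, 8  # input dimension and number of hidden neurons (illustrative)
rho = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)
sigma = rng.standard_normal(m) + 1j * rng.standard_normal(m)

z = np.array([0.3 - 0.1j, -0.5 + 0.7j])  # a point in [-1,1]^n + i[-1,1]^n
out = shallow_cvnn(z, rho, b, sigma)
print(out)
```

The theorem concerns how well such maps, with $m$ neurons and suitably chosen weights, can approximate a given $C^k$ function uniformly on the cube, with error decaying like $m^{-k/(2n)}$.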










