Intelligent systems II. Complete approximation by neural network operators (Q2352904)

From MaRDI portal
scientific article

6 July 2015
Approximation by means of artificial neural networks has been a central topic in the research work of the author. His earlier results have been gathered in the monograph [Intelligent systems. Approximation by artificial neural networks. Berlin: Springer (2011; Zbl 1243.68002)]. The present book is intended as a continuation of this monograph and contains mainly the author's research results of recent years. To approximate functions, the author uses essentially the same strategy. Inspired by the way the Cardaliaguet-Euvrard operators have been defined (see the paper [\textit{P. Cardaliaguet} and \textit{G. Euvrard}, ``Approximation of a function and its derivative with a neural network'', Neural Netw. 5, No. 2, 207--220 (1992; \url{doi:10.1016/S0893-6080(05)80020-6})]), the author introduces the ``normalized neural network operators''. They are constructed by normalizing certain finite expansions of the function \(f(x)\) in terms of a specific basis function. The normalized network operator generates an output of the neural network that is interpreted as an approximation of \(f(x)\), and the approximation error is then measured as the difference from \(f(x)\). The aim is to investigate convergence and to prove inequalities that yield convergence rates. If, for instance, bell-shaped or squashing functions are chosen as basis functions, the ``normalized bell'' or ``normalized squashing'' neural network operators are generated. In the first two chapters of the book, results on convergence and rates of convergence, in the univariate as well as in the multivariate case, are shown for approximations with normalized bell and normalized squashing-type neural network operators. In the third chapter, new basis functions are constructed from the sigmoid and hyperbolic tangent functions, respectively. The corresponding operators are the quasi-interpolation sigmoidal and quasi-interpolation hyperbolic tangent neural network operators.
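To fix ideas, in the univariate case the normalized operators have roughly the following shape (a sketch only, following the Cardaliaguet-Euvrard pattern; the basis function \(b\), the scaling parameter \(\alpha\in(0,1)\) and the index range are placeholders here and vary from chapter to chapter of the book):

```latex
% Sketch of a normalized neural network operator: b is a centered
% bell-shaped (or squashing) basis function, 0 < \alpha < 1 a scaling
% parameter; index range and scaling are illustrative only.
\[
  \bigl(F_n f\bigr)(x)
  = \frac{\displaystyle\sum_{k=-n^2}^{n^2} f\!\left(\tfrac{k}{n}\right)
          \, b\!\left(n^{1-\alpha}\bigl(x-\tfrac{k}{n}\bigr)\right)}
         {\displaystyle\sum_{k=-n^2}^{n^2}
          b\!\left(n^{1-\alpha}\bigl(x-\tfrac{k}{n}\bigr)\right)} .
\]
```

The normalization by the denominator makes the operator reproduce constant functions, which is the natural starting point for convergence estimates as \(n\to\infty\).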
Some Jackson-type inequalities involving the moduli of continuity of the right and left Caputo fractional derivatives of the function to be approximated are proved. In Chapter 4 the network operators are constructed directly from the Cardaliaguet-Euvrard operators, with bell-shaped as well as squashing functions as bases. The error estimates involve the right and left Caputo fractional derivatives. In Chapter 5 the network operators are again the quasi-interpolation sigmoidal and quasi-interpolation hyperbolic tangent neural network operators. For the univariate case, fractional Voronovskaya-type asymptotic expansions for the approximation error are proved. Multivariate Voronovskaya-type asymptotic expansions are discussed in Chapter 6. Chapter 7 focuses again on normalized bell and squashing-type operators; in evaluating the approximation error, bounds involving the moduli of continuity of the right and left Caputo fractional derivatives of the function to be approximated are obtained. In Chapter 8, Voronovskaya-type asymptotic expansions for the error of approximation of the normalized bell and squashing-type neural network operators to the unit operator are proved. The multivariate case is the subject of Chapter 9. Chapters 10 to 15 are devoted to the investigation of approximations by means of fuzzy-random normalized neural network operators. In Chapter 10, multivariate fuzzy-random normalized neural network operators of Cardaliaguet-Euvrard and squashing type are considered. The convergence rates are derived from multivariate probabilistic Jackson-type inequalities and involve the multivariate fuzzy-random modulus of continuity of the function to be approximated and its fuzzy partial derivatives. In Chapter 11, univariate fuzzy normalized bell and squashing-type operators are considered.
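Since the error bounds in these chapters are phrased via Caputo fractional derivatives, it may help to recall their standard definitions (for \(\nu>0\) with \(n=\lceil\nu\rceil\) and \(f\) sufficiently smooth on \([a,b]\); the notation here is the common one, not necessarily the author's):

```latex
% Left (D_{*a}^\nu) and right (D_{b-}^\nu) Caputo fractional
% derivatives of order \nu > 0, with n = \lceil \nu \rceil:
\[
  D_{*a}^{\nu} f(x)
  = \frac{1}{\Gamma(n-\nu)} \int_a^x (x-t)^{\,n-\nu-1} f^{(n)}(t)\,dt ,
  \qquad
  D_{b-}^{\nu} f(x)
  = \frac{(-1)^n}{\Gamma(n-\nu)} \int_x^b (t-x)^{\,n-\nu-1} f^{(n)}(t)\,dt .
\]
```

The moduli of continuity of these fractional derivatives are what enter the Jackson-type estimates mentioned above.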
The associated fuzzy Jackson-type inequalities involve the fuzzy moduli of continuity of the right and left fuzzy Caputo fractional derivatives of the function to be approximated. In Chapter 12, rates of convergence for approximations with univariate quasi-interpolation sigmoidal and hyperbolic tangent fuzzy neural network operators are investigated. In Chapter 13, multivariate fuzzy approximations by means of fuzzy normalized bell and squashing-type neural network operators are studied. Chapter 14 is devoted to higher-order multivariate fuzzy approximation using quasi-interpolation neural network operators, and Chapter 15 to the study of multivariate pointwise and uniform convergence in the \(q\)-mean of the multivariate fuzzy-random quasi-interpolation neural network operators to the fuzzy-random unit operator. In Chapter 16, multivariate approximations with Kantorovich-type neural network operators are considered. In Chapter 17, the basis function is constructed from the error function \(\operatorname{erf}\). Real- and complex-valued functions are then approximated by means of quasi-interpolation, Baskakov-type and quadrature-type neural network operators. Fractional neural network approximations are considered as well. The extension of these results to the multivariate case is carried out in Chapter 18. Voronovskaya-type asymptotic expansions for the approximation with error-function-based quasi-interpolation neural networks are presented in Chapter 19. Chapters 20 and 21 extend the results of Chapters 17 and 18 to approximations by quasi-interpolation error-function-based fuzzy neural network operators, in the univariate as well as the multivariate case. In Chapter 22, multivariate fuzzy-random error-function-based neural network approximations are presented. Finally, Chapters 23 to 28 are dedicated to approximations with perturbed neural networks.
In a perturbed neural network, the usual sample coefficients \(f(k/n)\) are replaced by perturbed and weighted coefficients. In this way, each type of neural network operator presented before generates a new type of operator: the so-called perturbed normalized neural network operators, fuzzy perturbed normalized neural network operators, and perturbed normalized fuzzy-random neural network operators. Convergence, rates of convergence, Jackson-type inequalities and Voronovskaya-type asymptotic expansions for approximations with these perturbed operators are the topics investigated in this part of the book. References are appended to each chapter; they mostly point to the author's own published papers, together with the paper by Cardaliaguet and Euvrard [loc. cit.], which appears to have priority in defining the special types of neural network operators so extensively studied in this monograph. The formal presentation by Springer is excellent.
Keywords: artificial neural networks; neural network operators; approximation of functions; convergence; rates of convergence; fuzzy neural network operators; Caputo fractional derivatives; Jackson-type inequalities
