Radial basis function networks 1. Recent developments in theory and applications (Q5936850)

From MaRDI portal
scientific article; zbMATH DE number 1615701

    Statements

    9 July 2001
This collection of articles about Radial Basis Function (RBF) networks covers recent advances in training algorithms, variations on the architecture and the functioning of the basic neurons, kernel interpretations of RBF networks, and related topics. A sister volume in the same series focuses on implementations and applications.

Chapter 1 covers dynamic RBF networks from a system-theoretic perspective; both discrete-time and continuous-time RBF networks are presented. Chapter 2 discusses a decision-tree initialization of an RBF network: the feature space is partitioned into relatively homogeneous hyperrectangular regions, and each region is then associated with one hidden neuron, so the hidden-layer parameters of the RBF network are determined automatically. Hierarchical RBF networks are proposed in Chapter 3, based on a combination of linear filtering theory and multi-scale analysis: grids of Gaussian kernels are stacked in layers with the aim of obtaining a uniform residual error, and results on the reconstruction of 3-D images of human faces are reported. Chapter 4 introduces RBF networks with orthogonal basis functions, called the ``minimum spectral neural network''. The proposed method aims at finding a smaller model than that derived by the Support Vector Machine (SVM) technique; it is based on natural priors of the data and on the orthogonal Gaussian and complementary Gaussian basis functions derived from them. Chapter 5 proposes highly noise-immune RBF networks obtained by applying the least trimmed squares method. Examples of 1-D and 2-D function approximation show the superiority of the proposed RBF model over more conventional models, and Monte Carlo simulations illustrate the ability of the new network to control variance and bias in the estimates. Training RBF networks using robust statistics is proposed in Chapter 6.
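The architecture common to all of these chapters is a hidden layer of radial kernels followed by a linear output layer. A minimal sketch (not taken from the book; Gaussian kernels, fixed centers and width, output weights fitted by ordinary least squares, all names illustrative):

```python
import numpy as np

def gaussian_rbf(X, centers, width):
    """Hidden-layer activations: one Gaussian kernel per center."""
    # Squared Euclidean distance from each sample to each center
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    """Fit the linear output weights by least squares."""
    H = gaussian_rbf(X, centers, width)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def predict(X, centers, width, w):
    return gaussian_rbf(X, centers, width) @ w

# 1-D toy example: approximate sin(x) with 10 Gaussian units
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
centers = np.linspace(-3, 3, 10)[:, None]   # evenly spaced centers
w = fit_rbf(X, y, centers, width=0.8)
err = np.abs(predict(X, centers, 0.8, w) - y).max()
```

The chapters above differ mainly in how the centers and widths are chosen (decision trees, multi-scale grids, orthogonalization) and in how the fitting step is made robust; the linear least-squares output stage is the common baseline.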
The two-stage training algorithm consists of a Learning Vector Quantization (LVQ) stage to find the hidden-unit parameters and a backpropagation stage to find the output weights. To eliminate the effect of outliers, the parameters of the hidden-layer nodes are estimated by the median and the alpha-trimmed mean.

Chapter 7 explains kernel methods for classification, regression and novelty detection. It is shown that in all these cases the training reduces to the optimization of a convex cost function. Algorithms for training kernel-based systems, including model selection, are given; the focus of the chapter is on SVMs, the best-known learning systems based on kernel methods. In Chapter 8, RBF networks are used for unsupervised learning: kernel Principal Component Analysis methods are reviewed in relation to Factor Analysis, Exploratory Projection Pursuit and Canonical Correlation Analysis, with illustrations on real and synthetic data. The so-called ``stability-plasticity dilemma'' is discussed in Chapter 9 in the context of RBF learning in a nonstationary environment: the network responds to changes in the environment (plasticity) while preserving what has already been learned (stability). A growing RBF network is proposed, with an individual learning rate updated at each node; the algorithm is illustrated on artificial and real data. Autonomous learning methods are addressed in Chapter 10: a brain-like learning theory is outlined, setting up a new principled framework for neural network training, and RBF networks for function approximation and classification are suggested within it. Chapter 11 discusses evolutionary optimization of RBF networks; several methods from the literature are surveyed, and a training method based on a genetic algorithm with a new crossover operation is proposed.
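The alpha-trimmed mean invoked in the robust-statistics chapter can be sketched generically: discard the extreme values on both ends before averaging, so a single outlier cannot drag the estimate (a minimal illustration; the parameter names are ours, not the book's):

```python
import numpy as np

def alpha_trimmed_mean(x, alpha=0.1):
    """Drop the alpha fraction of smallest and largest values, then average.
    alpha = 0 recovers the plain mean; alpha near 0.5 approaches the median."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(alpha * len(x))              # samples trimmed from each end
    trimmed = x[k:len(x) - k] if k > 0 else x
    return trimmed.mean()

# A single outlier shifts the plain mean but barely moves the trimmed mean
data = [1.0, 1.1, 0.9, 1.2, 0.8, 100.0]
plain = np.mean(data)                      # pulled toward 100.0
robust = alpha_trimmed_mean(data, alpha=0.2)
```

Used in place of the plain mean when estimating hidden-node centers, such an estimator keeps a few contaminated samples from distorting the placement of the Gaussian kernels, which is the point of the two-stage robust training scheme summarized above.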
    Radial basis function networks
    training algorithms and architectures
