Hebbian learning and negative feedback networks. (Q1770844)

From MaRDI portal
scientific article (English)

    Statements

    Hebbian learning and negative feedback networks. (English)
    7 April 2005
    This book is concerned with developing unsupervised learning procedures and building self-organising network modules that can capture regularities of the environment. The author begins with a simple negative feedback network and shows that it has several interesting statistical properties. The self-adaptive capabilities of the networks are then enhanced with various data exploration tools that adapt themselves to find different types of structure in data sets.

    Negative feedback neural networks are the central theme of the book. The author analyses several specific architectures and their associated learning mechanisms. The architectures constructed are able to self-organise so as to identify the principal component filters of a data set, so that the most relevant information is extracted automatically. All the models are based on unsupervised Hebbian learning and use only local information to self-organise; no globally collected information is available throughout the networks.

    The first part of the book deals with extracting information from a single stream of artificial data. The author reviews simple unsupervised learning rules and basic artificial network architectures, and introduces Hebbian learning, information theory and principal component analysis. An interesting variation of negative feedback networks is obtained by allowing the feedforward weights to differ from the feedback weights. The ability of negative feedback networks to self-organise so as to find the principal components of input data is analysed both theoretically and experimentally, and interesting theoretical results are presented. The effects of trainable lateral connections and of weight constraints on network convergence are investigated. Variations of negative feedback networks that can deal with high-dimensional data sets are introduced, tested on both artificial and real data, and analysed in terms of principal component analysis. Several variations that perform topology-preserving quantisation of the data set are also presented.

    The second part of the book deals with information that is shared over two data streams simultaneously, investigating several types of dual-stream architectures which are tested on both real and artificial data. Several interesting topics are covered, from artificial neural networks that perform Canonical Correlation Analysis in Chapter 9 to nonlinearities and multicollinearity in Chapters 10--14. Techniques of Exploratory Correlation Analysis are developed in order to find higher-order structures shared between two streams. Twinned Principal Curves, a method for combining information from two data sources with a nonlinear underlying correlation, is also considered. The last chapter provides a brief review and an interesting insight into future research directions.

    Overall, the book provides a detailed introduction to Hebbian learning and negative feedback neural networks and is suitable for self-study or for use in an introductory course.
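    To make the basic mechanism concrete, the following is a minimal sketch (not taken from the book) of the simple negative feedback network described above: each input is passed forward through the weights, the reconstruction is fed back and subtracted from the input, and the weights receive a Hebbian update from the product of the output activations and the residual. The data dimensions, learning rate, and variable names are illustrative choices, not the author's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 5-dimensional inputs whose variance is concentrated
# in the first two coordinates, so the principal subspace is known.
n_samples, n_in, n_out = 5000, 5, 2
scales = np.array([5.0, 3.0, 1.0, 1.0, 1.0])  # illustrative variances
X = rng.standard_normal((n_samples, n_in)) * scales

W = 0.1 * rng.standard_normal((n_out, n_in))  # shared feedforward/feedback weights
eta = 0.001                                   # learning rate (illustrative value)

for x in X:
    y = W @ x                   # feedforward activation of the output neurons
    e = x - W.T @ y             # negative feedback: input minus fed-back reconstruction
    W += eta * np.outer(y, e)   # Hebbian update on (output, residual) products

# After training, the rows of W approximately span the principal
# subspace of the data (here: the first two coordinate directions).
print(np.round(W, 2))
```

    Expanding the update gives dW = eta * (y x^T - y y^T W), i.e. a principal subspace rule, which is why the network converges to the principal component filters. The variant mentioned above, in which the feedforward weights differ from the feedback weights, would replace W.T in the feedback step by a separately trained matrix.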
    unsupervised learning
    negative feedback networks
    principal component analysis