Information and Topology in Attractor Neural Networks

From MaRDI portal
Publication:3440422

DOI: 10.1162/NECO.2007.19.4.956
zbMATH Open: 1118.68116
DBLP: journals/neco/DominguezKSR07
arXiv: cond-mat/0506535
OpenAlex: W2130189314
Wikidata: Q48248988
Scholia: Q48248988
MaRDI QID: Q3440422
FDO: Q3440422


Authors:


Publication date: 22 May 2007

Published in: Neural Computation (Search for Journal in Brave)

Abstract: A wide range of networks, including those with small-world topology, can be modelled by the connectivity γ and randomness ω of the links. Both the learning and attractor abilities of a neural network can be measured by the mutual information (MI), as a function of the load rate and the overlap between patterns and retrieval states. We use MI to search for the optimal topology for the storage and attractor properties of the network. We find that, while the largest storage implies an optimal MI(γ, ω) at γ_opt(ω) → 0, the largest basin of attraction leads to an optimal topology at moderate levels of γ_opt, whenever 0 ≤ ω < 0.3. This γ_opt is related to the clustering and path length of the network. We also build a diagram for the dynamical phases with random and local initial overlap, and show that very diluted networks lose their attractor ability.
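The two ingredients of the abstract can be sketched concretely: a ring network whose links are controlled by a connectivity parameter γ and a rewiring (randomness) parameter ω, in the spirit of the Watts-Strogatz construction, and the mutual information per neuron between a stored binary pattern and its retrieval state at overlap m (the binary-symmetric-channel formula). This is a minimal illustrative sketch, not the authors' code; the function names and the exact rewiring rule are assumptions.

```python
import numpy as np

def small_world_adjacency(n, gamma, omega, seed=None):
    """Sketch of a small-world connectivity matrix on a ring of n neurons.

    gamma: connectivity, the fraction of the n-1 possible links each
           neuron keeps (as local ring neighbours before rewiring).
    omega: randomness, the probability that each local link is rewired
           to a uniformly random target (Watts-Strogatz style).
    """
    rng = np.random.default_rng(seed)
    k = max(1, int(round(gamma * (n - 1) / 2)))  # neighbours per side
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n                      # local ring link
            if rng.random() < omega:             # rewire with prob. omega
                j = rng.integers(n)
                while j == i or adj[i, j]:       # avoid self/duplicate links
                    j = rng.integers(n)
            adj[i, j] = adj[j, i] = True         # undirected link
    return adj

def mutual_info_per_neuron(m):
    """MI in bits between a stored +/-1 bit and its retrieved bit
    when the retrieval overlap is m, i.e. P(correct) = (1 + m) / 2."""
    p = (1.0 + m) / 2.0
    if p <= 0.0 or p >= 1.0:
        return 1.0                               # perfect (anti-)retrieval
    h2 = -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))
    return 1.0 - h2
```

With perfect retrieval (m = 1) the MI is 1 bit per neuron, and at chance overlap (m = 0) it vanishes, which is why MI can serve as a single figure of merit combining load and overlap when scanning over (γ, ω).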


Full work available at URL: https://arxiv.org/abs/cond-mat/0506535




Recommendations




Cites Work


Cited In (6)





This page was built for publication: Information and Topology in Attractor Neural Networks
