Information and Topology in Attractor Neural Networks

From MaRDI portal
Publication:3440422




Abstract: A wide range of networks, including those with small-world topology, can be modelled by the connectivity γ and the randomness ω of the links. Both the learning and attractor abilities of a neural network can be measured by the mutual information (MI), as a function of the load rate and of the overlap between patterns and retrieval states. We use the MI to search for the topology that optimizes the storage and attractor properties of the network. We find that, while the largest storage capacity implies an optimal MI(γ, ω) at γ_opt(ω) → 0, the largest basin of attraction leads to an optimal topology at moderate values of γ_opt whenever 0 ≤ ω < 0.3. This γ_opt is related to the clustering coefficient and the path length of the network. We also build a diagram of the dynamical phases for random and local initial overlaps, and show that very diluted networks lose their attractor ability.
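The abstract measures retrieval quality by the mutual information between stored patterns and retrieval states as a function of their overlap. As a minimal sketch (not the paper's exact expression), one can model a ±1 retrieval state with overlap m as a binary symmetric channel with flip probability (1 − m)/2, giving MI per neuron in bits; the function names here are illustrative:

```python
import math

def binary_entropy(p):
    """Binary entropy H2(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def mutual_information(m):
    """MI (bits per neuron) between a stored +-1 pattern and the
    retrieval state, given overlap m in [0, 1].

    Assumption: the retrieval state is modelled as the pattern passed
    through a binary symmetric channel with flip probability (1 - m)/2,
    so MI = 1 - H2((1 - m)/2). This is a standard simplification, not
    necessarily the paper's definition (which also involves the load rate).
    """
    return 1.0 - binary_entropy((1.0 - m) / 2.0)

# Perfect retrieval (m = 1) carries one full bit per neuron; zero
# overlap carries none, consistent with MI vanishing as attractors die.
print(mutual_information(1.0))  # → 1.0
print(mutual_information(0.0))  # → 0.0
```

Under this model, MI increases monotonically with the overlap m, which is why the authors can use it as a single scalar to rank topologies (γ, ω) by both storage and basin-of-attraction quality.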









