A theory of capacity and sparse neural encoding


DOI: 10.1016/j.neunet.2021.05.005
zbMATH Open: 1521.68109
arXiv: 2102.10148
MaRDI QID: Q6079092


Authors: Pierre Baldi, Roman Vershynin


Publication date: 28 September 2023

Published in: Neural Networks

Abstract: Motivated by biological considerations, we study sparse neural maps from an input layer to a target layer with sparse activity, and specifically the problem of storing K input-target associations (x, y), or memories, when the target vectors y are sparse. We mathematically prove that K undergoes a phase transition and that, in general and somewhat paradoxically, sparsity in the target layer increases the storage capacity of the map. The target vectors can be chosen arbitrarily, including at random, and the memories can be both encoded and decoded by networks trained using local learning rules, including the simple Hebb rule. These results are robust under a variety of statistical assumptions on the data. The proofs rely on elegant properties of random polytopes and sub-gaussian random vectors. Open problems and connections to capacity theories and polynomial threshold maps are discussed.
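The storage scheme described in the abstract can be illustrated concretely. The following is a minimal sketch, not code from the paper: it stores K input-target pairs with sparse binary targets via the one-shot Hebb rule W = sum_k y_k x_k^T, and recalls a target by activating the s most strongly driven units. All sizes and parameter values (n, m, K, s) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 200   # input / target layer sizes (illustrative)
K = 50            # number of stored input-target associations
s = 5             # active units per sparse binary target

# Random dense inputs and sparse binary targets
X = rng.standard_normal((K, n))
Y = np.zeros((K, m))
for k in range(K):
    Y[k, rng.choice(m, size=s, replace=False)] = 1.0

# One-shot Hebbian learning: W accumulates the outer products y_k x_k^T
W = Y.T @ X

def recall(x):
    """Recall a sparse target: activate the s most strongly driven units."""
    a = W @ x
    y_hat = np.zeros(m)
    y_hat[np.argsort(a)[-s:]] = 1.0
    return y_hat

# Fraction of stored memories recalled exactly
correct = sum(np.array_equal(recall(X[k]), Y[k]) for k in range(K))
print(f"exactly recalled {correct}/{K} sparse targets")
```

In this toy setting, exact recall degrades sharply once K grows large relative to n and the target sparsity, loosely mirroring the capacity phase transition the paper analyzes.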


Full work available at URL: https://arxiv.org/abs/2102.10148






Cited in: 6 documents




