A network of spiking neurons for computing sparse representations in an energy-efficient way
From MaRDI portal
Publication:2840871
Abstract: Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such an operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with that of existing algorithms. In the asymptotic regime, the representation error of HDA decays with time t as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/sqrt(t) for Gaussian white noise.
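The mechanism described in the abstract — analog internal variables updated by gradient-like integration, combined with quantized spikes exchanged between units — can be illustrated with a toy simulation. The sketch below is a hypothetical illustration in that spirit, not the authors' actual HDA: all names and parameters are made up, and the sparsity penalty is omitted for brevity, so the network only drives the reconstruction residual toward zero.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's HDA): integrate-and-fire
# units reconstruct a signal from a redundant dictionary while
# communicating only quantized +/-1 spikes. Each unit keeps an analog
# membrane potential (internal variable) and broadcasts a spike
# (quantized external variable) when the potential crosses a threshold.

rng = np.random.default_rng(0)
n, m = 8, 16                        # signal dimension, dictionary size
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)      # unit-norm dictionary atoms

a_true = np.zeros(m)
a_true[[2, 7, 11]] = [1.0, -0.5, 0.8]
x = D @ a_true                      # sparse synthetic input

W = D.T @ D                         # Gram matrix: lateral coupling
b = D.T @ x                         # feedforward drive to each unit
lam = 0.5                           # firing threshold (spike quantum)
dt, T = 0.01, 20000                 # integration step, number of steps

u = np.zeros(m)                     # analog membrane potentials
counts = np.zeros(m)                # signed spike counts per unit

for _ in range(T):
    u += dt * b                     # gradient-like analog integration
    s = np.where(u > lam, 1.0, np.where(u < -lam, -1.0, 0.0))
    counts += s                     # emit quantized +/- spikes
    u -= lam * (W @ s)              # self-reset plus lateral inhibition

# Decode coefficients as spike rates; each spike carries weight lam.
a_hat = lam * counts / (T * dt)
rel_err = np.linalg.norm(x - D @ a_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")
```

Decoding coefficients as spike rates mirrors the abstract's asymptotic claim: the membrane potentials stay bounded, so the residual `b - W @ a_hat`, which equals `u / (T * dt)` here, shrinks as integration time grows.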
Recommendations
- A common network architecture efficiently implements a variety of sparsity-based inference problems
- Efficient Computation Based on Stochastic Spikes
- Finding independent components using spikes: A natural result of Hebbian learning in a sparse spike coding scheme
- Lower Bounds for the Computational Power of Networks of Spiking Neurons
- Adapting spiking neural networks
Cites work
- scientific article (zbMATH DE number 1983334; title not available)
- Atomic Decomposition by Basis Pursuit
- Convergence of the linearized Bregman iteration for \(\ell _1\)-norm minimization
- Convex optimization theory.
- Coordinate descent optimization for \(l^{1}\) minimization with application to compressed sensing; a greedy algorithm
- Equality relating Euclidean distance cone to positive semidefinite cone
- Fast linearized Bregman iteration for compressive sensing and sparse denoising
- Least angle regression. (With discussion)
- Linearized Bregman iterations for compressed sensing
- Pathwise coordinate optimization
- Regularization and Variable Selection Via the Elastic Net
Cited in (4)
- Short-term memory capacity in networks via the restricted isometry property
- A theoretical perspective on hyperdimensional computing
- A common network architecture efficiently implements a variety of sparsity-based inference problems
- Accurate, energy-efficient classification with spiking random neural network