A common network architecture efficiently implements a variety of sparsity-based inference problems
DOI: 10.1162/NECO_A_00372
zbMATH Open: 1268.92001
DBLP: journals/neco/CharlesGR12
OpenAlex: W2107306109
Wikidata: Q40062426 (Scholia: Q40062426)
MaRDI QID: Q2840897
FDO: Q2840897
Authors: Adam S. Charles, Pierre Garrigues, Christopher J. Rozell
Publication date: 23 July 2013
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00372
Recommendations
- A network of spiking neurons for computing sparse representations in an energy-efficient way
- Learning and Inference in Sparse Coding Models With Langevin Dynamics
- Belief Propagation in Networks of Spiking Neurons
- A sparse coding model based on structural similarity
- Deep Learning as Sparsity-Enforcing Algorithms
Classification (MSC)
- Applications of statistics to biology and medical sciences; meta analysis (62P10)
- Computational methods for problems pertaining to biology (92-08)
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Robust regression: Asymptotics, conjectures and Monte Carlo
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Regularization of Wavelet Approximations
- Local Strong Homogeneity of a Regularized Estimator
- Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Neural networks and physical systems with emergent collective computational abilities
- Block-Sparse Signals: Uncertainty Relations and Efficient Recovery
- Wavelet-based image estimation: an empirical Bayes approach using Jeffrey's noninformative prior
- Title not available
- Internal representations for associative memory
Cited In (1)