Learning hierarchically-structured concepts
From MaRDI portal
Publication:6055133
DOI: 10.1016/j.neunet.2021.07.033
zbMath: 1521.68142
arXiv: 1909.04559
OpenAlex: W3193986166
MaRDI QID: Q6055133
Frederik Mallmann-Trenn, Nancy A. Lynch
Publication date: 28 September 2023
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1909.04559
Keywords: spiking neural networks ⋮ hierarchical concepts ⋮ brain-inspired algorithms ⋮ learning hierarchical concepts ⋮ recognizing hierarchical concepts ⋮ representing hierarchical concepts
Related Items (2)
A basic compositional model for spiking neural networks ⋮ Learning hierarchically-structured concepts. II: Overlapping concepts, and networks with feedback
Cites Work
- A simplified neuron model as a principal component analyzer
- The Hippocampus as a Stable Memory Allocator for Cortex
- A neuroidal architecture for cognitive computation
- Computational Tradeoffs in Biological Neural Networks: Self-Stabilizing Winner-Take-All Networks
- Memorization and Association on a Realistic Neural Model
- Spiking Neuron Models
- Lower Bounds for the Computational Power of Networks of Spiking Neurons
- Long Term Memory and the Densest K-Subgraph Problem
- Memory Capacities for Synaptic and Structural Plasticity
- Spike-Based Winner-Take-All Computation: Fundamental Limits and Order-Optimal Circuits
- Random Sketching, Clustering, and Short-Term Memory in Spiking Neural Networks
- Concentration of Measure for the Analysis of Randomized Algorithms
This page was built for publication: Learning hierarchically-structured concepts