A simple method for estimating the entropy of neural activity
Publication:3301563
Recommendations
- Estimation bias in maximum entropy models
- Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity
- Estimating Entropy Rates with Bayesian Confidence Intervals
- Estimating the Temporal Interval Entropy of Neuronal Discharge
- Tight Data-Robust Bounds to Mutual Information Combining Shuffling and Model Selection Techniques
Cites work
- A Generalization of Sampling Without Replacement From a Finite Universe
- A Mathematical Theory of Communication
- Estimating the unseen: an \(n/\log(n)\)-sample estimator for entropy and support size, shown optimal via new CLTs
- Estimation of Entropy and Mutual Information
- Information Theory and Statistical Mechanics
- Notes on Bias in Estimation
- The Population Frequencies of Species and the Estimation of Population Parameters
- The Complexity of Approximating the Entropy
Cited in (10)
- Operations research methods for estimating the population size of neuron types
- Estimation bias in maximum entropy models
- The measurement of information transmitted by a neural population: promises and challenges
- Dynamic Multiscale Modes of Resting State Brain Activity Detected by Entropy Field Decomposition
- Markov dependency based on Shannon's entropy and its application to neural spike trains
- The population tracking model: a simple, scalable statistical model for neural population data
- Estimating the Temporal Interval Entropy of Neuronal Discharge
- Estimating Entropy Rates with Bayesian Confidence Intervals
- Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity
- The Limits of Counting Accuracy in Distributed Neural Representations