Estimating Information Rates with Confidence Intervals in Neural Spike Trains
DOI: 10.1162/neco.2007.19.7.1683
zbMath: 1146.68435
OpenAlex: W2106012167
Wikidata: Q51913983
MaRDI QID: Q5457581
Jonathon Shlens, Matthew B. Kennel, Henry D. I. Abarbanel, E. J. Chichilnisky
Publication date: 14 April 2008
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco.2007.19.7.1683
Classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Neural biology (92C20)
- Statistical aspects of information-theoretic topics (62B10)
Related Items (4)
- Synergy, redundancy, and multivariate information measures: an experimentalist's perspective
- Indices for Testing Neural Codes
- A Finite-Sample, Distribution-Free, Probabilistic Lower Bound on Mutual Information
- Bayesian and quasi-Bayesian estimators for mutual information from discrete data
Cites Work
- A Mathematical Theory of Communication
- Measuring Information Spatial Densities
- Estimating Entropy Rates with Bayesian Confidence Intervals
- On the Complexity of Finite Sequences
- The Stationary Bootstrap
- Nonparametric entropy estimation for stationary processes and random fields, with applications to English text
- Automatic Block-Length Selection for the Dependent Bootstrap
- Entropy estimation of symbol sequences
- Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Encoding Model
- Computation in a Single Neuron: Hodgkin and Huxley Revisited
- Estimation of Entropy and Mutual Information
- Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity
- Analyzing Neural Responses to Natural Signals: Maximally Informative Dimensions
- The context-tree weighting method: basic properties
- Monte Carlo sampling methods using Markov chains and their applications
- Distribution of mutual information