Markov dependency based on Shannon's entropy and its application to neural spike trains
DOI: 10.1109/TSMC.1983.6313062 · zbMATH Open: 0543.92008 · OpenAlex: W2025100902 · MaRDI QID: Q3332811
Hisashi Fujii, K. Shima, Hiroshi Nakahama, Kojiro Aya, Mitsuaki Yamamoto
Publication date: 1983
Published in: IEEE Transactions on Systems, Man, and Cybernetics
Full work available at URL: https://doi.org/10.1109/tsmc.1983.6313062
Keywords: time series; prediction error; conditional entropy; Shannon's entropy; truncation method; Markov properties; Gaussian assumption; neuronal spike trains; autoregressive analysis; ensemble dependency analysis; neural modulation; simplified dependency
MSC classification: Time series, auto-correlation, regression, etc. in statistics (GARCH) (62M10); Physiological, cellular and medical topics (92Cxx)
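The keywords above (conditional entropy, Markov properties, neuronal spike trains) point at the paper's central idea: testing the Markov dependency of a spike train by how the conditional entropy H(X_t | X_{t-1}, …, X_{t-k}) behaves as the history length k grows. The following is a minimal illustrative sketch of that idea, not the authors' procedure; the transition probabilities and the plug-in (frequency-count) entropy estimator are assumptions made for the example.

```python
# Hedged sketch: for a Markov process of order m, the plug-in estimate of
# H(X_t | previous k symbols) stops decreasing once k >= m. We binarize the
# "spike train" to a 0/1 sequence and compare estimates at k = 0, 1, 2.
import random
from collections import Counter
from math import log2

def conditional_entropy(seq, k):
    """Plug-in estimate of H(X_t | X_{t-1..t-k}) in bits."""
    joint, hist = Counter(), Counter()          # (history, next) and history counts
    for i in range(k, len(seq)):
        h = tuple(seq[i - k:i])
        joint[h, seq[i]] += 1
        hist[h] += 1
    n = sum(joint.values())
    # H = -sum p(h, x) * log2 p(x | h), with p(x | h) = count(h, x) / count(h)
    return -sum(c / n * log2(c / hist[h]) for (h, _), c in joint.items())

# Simulate a first-order binary Markov chain (assumed parameters, not from
# the paper): P(spike | spike) = 0.9, P(spike | silence) = 0.1.
random.seed(0)
seq = [0]
for _ in range(20000):
    p_one = 0.9 if seq[-1] == 1 else 0.1
    seq.append(1 if random.random() < p_one else 0)

h0, h1, h2 = (conditional_entropy(seq, k) for k in (0, 1, 2))
print(h0, h1, h2)  # h0 is largest; h1 and h2 nearly coincide (order-1 chain)
```

The drop from h0 to h1 followed by a plateau at h2 is the signature of first-order Markov dependency; for longer histories the plug-in estimator becomes increasingly biased, which is one motivation for the truncation-style corrections the keywords mention.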
Cited In (2)
Recommendations
- Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity
- Entropy, mutual information, and systematic measures of structured spiking neural networks
- Efficient Markov chain Monte Carlo methods for decoding neural spike trains
- A simple method for estimating the entropy of neural activity
- Estimating statistics of neuronal dynamics via Markov chains
- Probabilistic inference of binary Markov random fields in spiking neural networks through mean-field approximation