Estimating the entropy of binary time series: methodology, some theory and a simulation study (Q845354)

From MaRDI portal
scientific article
Language: English

    Statements

    Estimating the entropy of binary time series: methodology, some theory and a simulation study (English)
    29 January 2010
    Summary: Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator.

    METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the "renewal entropy estimator," which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice.

    THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator.

    SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
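To make the plug-in method concrete, here is a minimal sketch of a plug-in (maximum-likelihood) entropy-rate estimate for a binary sequence: compute the empirical entropy of overlapping k-blocks and divide by the word-length k. The function name, the choice of k, and the coin-flip test sequence are illustrative assumptions, not details taken from the paper; the undersampling issue noted in (iv) appears here when k grows while the sample size stays fixed.

```python
# Sketch of a plug-in entropy-rate estimator for a binary sequence.
# Hypothetical helper, not the paper's exact implementation.
from collections import Counter
from math import log2
import random

def plugin_entropy_rate(x, k):
    """Estimate the entropy rate of x in bits per symbol:
    empirical entropy of overlapping k-blocks, divided by k."""
    blocks = [tuple(x[i:i + k]) for i in range(len(x) - k + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    h_k = -sum((c / n) * log2(c / n) for c in counts.values())
    return h_k / k

# Usage: a fair-coin sequence has true entropy rate 1 bit/symbol,
# so the estimate should come out slightly below 1 (the bias is negative).
random.seed(0)
x = [random.randint(0, 1) for _ in range(10000)]
print(plugin_entropy_rate(x, k=5))
```

Note that with 2^k possible words, a fixed sample severely undersamples the empirical k-block distribution once k is large, which is exactly the bias/word-length trade-off described in conclusion (iv).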
    entropy estimation
    Lempel-Ziv coding
    context-tree weighting
    simulation
    spike trains