The measures of sequence complexity for EEG studies (Q1342152)

From MaRDI portal
scientific article
    Statements

    The measures of sequence complexity for EEG studies (English)
    17 August 1995
    The authors consider sequences of data taking a finite number of values; such sequences form points of a symbol space. They introduce two measures of sequence complexity, which they call \(c_1\) and \(c_2\). The measure \(c_1\) is related to the entropy of the shift map on the space of allowable data points: it measures the rate of growth of the number of words of length \(n\) occurring in the first \(n \cdot 2^n\) terms of the sequence. The second measure, \(c_2\), measures the rate of growth of the number of words of length \(n\) which do not occur, but whose subword of length \(n - 1\) obtained by right truncation does occur in the first \(n \cdot 2^n\) terms of the sequence.

    The authors point out that for periodic systems both \(c_1\) and \(c_2\) are 0, while for `completely random systems' (e.g. the full 2-shift) \(c_1\) is non-zero but \(c_2\) is 0. They contend that this means the dynamics are `simple but not complex', and argue that \(c_2\) is the more genuine measure of complexity. They then estimate \(c_1\) and \(c_2\) numerically as a function of the parameter for the family of logistic maps \(x \mapsto \lambda x(1-x)\).

    Finally, the authors apply these methods to data coming from EEG studies of brain activity. Unfortunately, they introduce some probabilities (obtained by averaging over trials) without describing their meaning. Using entropy theory, it can be seen that the quantities \(C_1\) and \(C_2\) which they define converge to 0 as the sequence length tends to \(\infty\). Since \(C_1\) and \(C_2\) are meant to be analogues of \(c_1\) and \(c_2\) above, this is unsatisfactory, and it would probably be more sensible to define \[ C_1 = n^{-1} \sum_i p_{ai} \log_2 N_{ai} \quad \text{and} \quad C_2 = n^{-1} \sum_i p_{ai} \log_2 N_{fi}. \] Some conclusions are drawn on the basis of the \(C_1\) and \(C_2\) values.
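    The review does not fix the exact normalization of \(c_1\) and \(c_2\); a minimal sketch, assuming each is the base-2 logarithm of the relevant word count divided by the word length \(n\), and counting words over the first \(n \cdot 2^n\) terms as described above (the names `count_words` and `c1_c2_estimates` are illustrative, not from the paper):

    ```python
    import math
    from itertools import product

    def count_words(seq, n):
        """Return the set of distinct length-n words occurring in seq."""
        return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

    def c1_c2_estimates(seq, n, alphabet=(0, 1)):
        """Estimate c1 and c2 from the first n * 2**n terms of seq.

        c1 counts the length-n words that occur; c2 counts the length-n
        words that do not occur although their right-truncation (first
        n - 1 symbols) does.  The log2(count)/n normalization is an
        assumption; the review only states these are growth rates.
        """
        prefix = seq[: n * 2 ** n]
        words_n = count_words(prefix, n)
        words_n1 = count_words(prefix, n - 1)
        # Occurring (n-1)-words whose one-symbol extension never occurs.
        forbidden = sum(
            1
            for w in words_n1
            for a in alphabet
            if w + (a,) not in words_n
        )
        c1 = math.log2(len(words_n)) / n if words_n else 0.0
        c2 = math.log2(forbidden) / n if forbidden else 0.0
        return c1, c2

    # A sequence containing every binary word of length 4 behaves like the
    # full 2-shift: c1 = 1 while c2 = 0 ('simple but not complex').
    full = [b for w in product((0, 1), repeat=4) for b in w]
    print(c1_c2_estimates(full, 4))       # -> (1.0, 0.0)

    # A constant (periodic) sequence gives c1 = c2 = 0.
    print(c1_c2_estimates([0] * 100, 3))  # -> (0.0, 0.0)
    ```

    The two checks mirror the dichotomy stressed by the authors: the full 2-shift has maximal \(c_1\) but vanishing \(c_2\), while periodic sequences have both measures equal to 0.
    
    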
    sequence complexity
    completely random systems
    measures of complexity of sequences
    entropy
    shift map
    periodic systems
    logistic maps
    EEG
    brain activity