The umbral transfer-matrix method. V: The Goulden-Jackson cluster method for infinitely many mistakes (Q2778469)

From MaRDI portal

scientific article; zbMATH DE number 1716043

      Statements

      2 April 2002
      umbral transfer-matrix method
      Goulden-Jackson cluster method
      generating functions
      enumerating words
      mistakes
      self-avoiding walks
      This paper is part of the author's series on the umbral transfer-matrix method, which is based on Gian-Carlo Rota's seminal concept of the umbra. The classical Goulden-Jackson cluster method computes generating functions enumerating words that avoid, as factors, any given finite set of mistakes. Here this powerful method is extended to compute generating functions enumerating words avoiding infinitely many mistakes. The goal achieved is the following: given a finite alphabet and a finite set of symbolic words (called mistakes), compute the generating function \(f(t):= \sum_{n=0}^{\infty} b_nt^n\), where \(b_n\) denotes the number of \(n\)-letter words in the alphabet not containing, as factors, any of these symbolic mistakes. A simple illustrative example computes \(f(t)\) for the alphabet \(\{1,2,3\}\) and words avoiding any factor of the form \(A:= 1\,2^{a+1}\,3^{a+1}\,1\) with \(a\ge 0\), i.e., avoiding \(1231\), \(122331\), \(12223331,\dots\) as factors, which is a set of infinitely many mistakes. Finally, the method is illustrated by introducing a new `fancy toy model' for self-avoiding walks which is more interesting than finite-memory approximations.
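      As a sanity check on the counting problem in the illustrative example, the first few coefficients \(b_n\) can be computed by brute-force enumeration. This is not the paper's umbral method, only a direct count of words over \(\{1,2,3\}\) avoiding every factor of the form \(1\,2^{k}\,3^{k}\,1\) with \(k\ge 1\); the function names are illustrative.

```python
from itertools import product

def has_mistake(word):
    """True if word (a string over '123') contains a factor of the
    form 1 2^k 3^k 1 for some k >= 1, i.e. 1231, 122331, ..."""
    n = len(word)
    for i, c in enumerate(word):
        if c != '1':
            continue
        # maximal run of 2s right after this '1'
        j = i + 1
        while j < n and word[j] == '2':
            j += 1
        twos = j - i - 1
        if twos == 0:
            continue
        # maximal run of 3s right after the 2s
        m = j
        while m < n and word[m] == '3':
            m += 1
        threes = m - j
        # a contiguous factor forces equal run lengths and a closing '1'
        if threes == twos and m < n and word[m] == '1':
            return True
    return False

def b(n):
    """Number of n-letter words over {1,2,3} avoiding all mistakes."""
    return sum(1 for w in product('123', repeat=n)
               if not has_mistake(''.join(w)))

print([b(n) for n in range(6)])  # → [1, 3, 9, 27, 80, 237]
```

      The shortest mistake, \(1231\), has length 4, so \(b_n = 3^n\) for \(n \le 3\); at \(n = 4\) exactly the word \(1231\) itself is excluded, giving \(b_4 = 80\).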

      Identifiers