Maximum entropy fundamentals (Q1613055): Difference between revisions
From MaRDI portal
OpenAlex ID: W2050249319
Wikidata QID: Q56157779
Revision as of 11:26, 4 April 2024
scientific article
| Language | Label | Description | Also known as |
| --- | --- | --- | --- |
| English | Maximum entropy fundamentals | scientific article | |
Statements
Maximum entropy fundamentals (English)
0 references
10 September 2002
Summary: In its modern formulation, the maximum entropy principle was promoted by E. T. Jaynes, starting in the mid-1950s. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretic thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late 1970s and is the view we follow here. It leads to the consideration of a certain game, the code length game, and, via standard game-theoretic reasoning, to a principle of game theoretical equilibrium. This principle is more basic than the maximum entropy principle in the sense that the search for one type of optimal strategy in the code length game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles, based on a study of the code length game. Although new concepts and results are presented, the treatment should be instructive and accessible to a rather wide audience, at least if certain mathematical details are set aside at a first reading. The most frequently studied instance of entropy maximization pertains to the mean energy model, which involves a moment constraint related to a given function, here taken to represent "energy". This model is very well known in the literature, with hundreds of applications across several different fields, and it also serves here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow for a discussion of models with so-called entropy loss.
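The mean-energy model mentioned in the summary can be illustrated with a short sketch (my own, not taken from the paper; the function names and energy values are illustrative): maximizing entropy subject to a moment constraint on mean "energy" yields a Gibbs distribution p_i ∝ exp(-β E_i), a member of the exponential family normalized by the partition function, and bisecting on β matches the constraint.

```python
import math

def gibbs(energies, beta):
    """Gibbs distribution p_i ∝ exp(-beta * E_i) over a finite alphabet."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

def mean(p, energies):
    """Mean energy under distribution p."""
    return sum(pi * e for pi, e in zip(p, energies))

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def maxent_given_mean(energies, target, lo=-50.0, hi=50.0):
    """Entropy maximizer under the constraint mean energy == target.
    Mean energy is strictly decreasing in beta, so bisection applies."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(gibbs(energies, mid), energies) > target:
            lo = mid  # mean too high: increase beta
        else:
            hi = mid
    return gibbs(energies, 0.5 * (lo + hi))

# Four-letter alphabet with energies 0..3 and target mean energy 1.0.
energies = [0.0, 1.0, 2.0, 3.0]
p = maxent_given_mean(energies, target=1.0)
```

Any other distribution with the same mean energy has strictly smaller entropy; in the coding view of the paper, the lengths -log p_i are then the optimal strategy in the code length game.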
These results have tempted us to speculate about the development of natural languages. In fact, we are able to relate our theoretical findings to the empirically observed Zipf's law, which concerns statistical aspects of words in a language. The apparent irregularity inherent in models with entropy loss turns out to imply desirable stability properties of languages.
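The Zipf's-law connection can be made concrete with a toy empirical check (my own illustration, not the authors' analysis; the sample text is artificial): rank word counts in decreasing order, and Zipf's law predicts that rank × frequency stays roughly constant.

```python
from collections import Counter

def rank_frequency(words):
    """Return (rank, count) pairs, rank 1 = most frequent word."""
    counts = sorted(Counter(words).values(), reverse=True)
    return list(enumerate(counts, start=1))

# Artificial sample; a real check would use a large corpus.
text = "the of the and the to of the a the in of and the to the a of the and".split()
pairs = rank_frequency(text)
# Zipf's law: count(r) ≈ C / r, so r * count(r) is roughly flat.
products = [r * c for r, c in pairs]
```

Even on this tiny sample the products cluster near a constant; on a large corpus the fit of count(r) ≈ C / r over the first few thousand ranks is the classical empirical finding the paper relates to entropy loss.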
Keywords: maximum entropy; minimum risk; game theoretical equilibrium; information topology; Nash equilibrium code; entropy loss; partition function; exponential family; continuity of entropy; hyperbolic distributions; Zipf's law; development of natural languages