Information and self-organization. A macroscopic approach to complex systems. (Q5904059)

From MaRDI portal
scientific article; zbMATH DE number 1381982

    Statements

    Information and self-organization. A macroscopic approach to complex systems. (English)
    2 January 2000
The first edition has already been reviewed (1988; Zbl 0659.93002), but we propose a more detailed account so that the content can be understood. Aristotle expressed a global extremal principle when considering the motion of planets: they should move as quickly as possible. Optimal motions being perfect, the trajectories had to be circles and the speed constant. This made it possible to transfer the temporal characterization into the spatial domain, but it was almost two thousand years later that Newton showed that this picture had to be modified: the speed changes, and this is described locally by a force (to be specified); (we do here an injustice to Kepler, who died before Newton was born and who knew, on global grounds, that the speed of the motion of the planets was changing). Newton's view led to subsequent developments: mechanics, dynamics, Riemannian geometry, the calculus of variations\dots{} The multiplication of scientific areas leads to more precision and organization, but it might be illuminating to look back at the informative potential of the generating ideas. From the temporal characterization of Aristotle one may ask whether the world not only saves energy as much as possible but also wants to communicate as much as possible; the moving object is then seen as a signal. Information is difficult to tackle because, like Newton's force, it refers to diversity. In the twentieth century Shannon attempted to quantify it by considering a probabilistic description and the associated entropy; Jaynes formulated extremal principles for this entropy and made connections with thermodynamics, which arose from statistical physics with Newton's picture at the microscopic level. Around the same time, von Neumann realized that entropy had to do with the self-organization of systems at the macroscopic level. This has led to subsequent work of various degrees of success and quality [for a taste see the survey of \textit{H. Atlan}, Information theory. Cybernetics, theory and applications, 9-41 (1983; Zbl 0584.94005)] and sometimes to misunderstandings.

In this volume the author presents a mechanism explaining the emergence of macroscopic patterns from a disorganized microscopic structure. This is possible in the context of nonequilibrium phase transitions, in a probabilistic description which connects a Newton-type microscopic stochastic dynamics with distributions obtained via an extremum principle for the information (or entropy in the sense of Shannon) under suitable constraints modelling macroscopic variables. So singularities (nonlinearities) are the key bringing together the perspectives of Jaynes (Shannon) and von Neumann: they determine the macroscopic patterns and the associated meaning. It should be emphasized that the constraints determining the shape of the distribution are not given systematically, so that the impact of diversity is displaced from information to them. Nevertheless the author convincingly illustrates his thesis through several examples. Let us also point out that energy and information are quite subtly related in this picture. Unstructured energy is pumped into the system, which is open; structured energy appears in the form of a potential in the distribution, in relation with the microscopic dynamics, but this potential must not be confused with the one used in statistical physics (modelling interactions between particles); and energy may appear, possibly implicitly, as a constraint when deriving the optimal distribution linking the two previous forms.
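The extremum principle invoked throughout is the standard maximum-entropy construction; to fix ideas (the notation below is generic, not the book's), one maximizes the Shannon information \[ S[p] = -\sum_i p_i \ln p_i \] subject to normalization and to macroscopic constraints \(\sum_i p_i f_k(q_i) = \bar f_k\), \(k=1,\dots,K\); Lagrange multipliers \(\lambda_k\) then yield the exponential family \[ p_i = \frac{1}{Z(\lambda)} \exp\Bigl(-\sum_{k=1}^{K} \lambda_k f_k(q_i)\Bigr), \qquad Z(\lambda) = \sum_i \exp\Bigl(-\sum_{k=1}^{K} \lambda_k f_k(q_i)\Bigr), \] with the \(\lambda_k\) fixed by the constraint values \(\bar f_k\).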
In chapter one the author first describes a variety of situations where complex systems are encountered. Complex systems are sets of various objects with different types of relations between them. He then explains that complex systems may be understood by decomposing them into elementary parts (cf. statistical physics) or through macroscopic variables which capture the relevant phenomena (cf. thermodynamics, or synergetics when the system is far from thermal equilibrium). The main topic of the book is then defined: self-organization happens if a structure is created without specific inputs acting on the system. Synergetics is seen as the discipline for studying open systems (so its relation to thermodynamics is analogous to the relation between control and dynamics) whereby, as the unspecific input reaches a certain level, the system reorganizes itself so that macroscopic variables supervise the microscopic parts, which behave in a coherent fashion. Next, the author introduces information starting from Shannon's formula and tries to define meaning via relations between sets of messages and objects which represent the signification of the message. The need for introducing dynamics, attractors and probabilities is not entirely clear. It allows, however, the use of Shannon's formula in this new context and the definition of information deficiency, the extreme case being the one where all the words are synonyms. This quantitative measure is not related to the semantic content of the information. It does allow one to quantify the passage from an information-rich situation, where subsystems send signals to themselves leading to an amplification effect, to an information-poor situation which results, once the critical level is reached, in the emergence of a coherent macroscopic structure that informs the whole set of subsystems how to behave through an economical signal. The information of an atom in a laser is computed, and a plateau appears when the pumping reaches a critical level. The mechanism described is similar to statistical physics (from microscopic to macroscopic); the other way proposed is to start from macroscopic quantities (their choice is not clear) seen as constraints in an optimization problem for the information, allowing one to derive probabilities for the microscopic level. So in this chapter the central ideas of the book are introduced, and they will be developed subsequently.

In chapter two the author examines how to derive macroscopic quantities from microscopic ones. The latter are described by a Langevin equation, i.e. a Newton-type law with a random perturbation (white noise). The macroscopic world is represented by the corresponding density for the state and is obtained by solving a Fokker-Planck equation. The form of the stationary solution for a constant diffusion term is obtained. For more general diffusion terms, but assuming that the stationary joint probability satisfies some reversibility properties (systems in detailed balance), a method to derive the stationary solution to the Fokker-Planck equation is stated. The solution can also be expressed by path integrals. Next, one linearizes the Langevin equation and performs a decomposition into stable and unstable modes. The latter will determine the macroscopic behaviour. The origin of the stated slaving principle is not explained (the reader is sent to other books of the author), but it allows one to decouple the unstable dynamics from the stable one.
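The Langevin/Fokker-Planck correspondence of chapter two is easy to check numerically. The sketch below is our own illustration (the potential, the noise strength and the step sizes are arbitrary choices, not taken from the book): it simulates an overdamped Langevin equation by the Euler-Maruyama scheme and compares the empirical histogram with the stationary Fokker-Planck density \(P_{st}(q)\propto\exp(-2V(q)/Q)\) valid for a constant diffusion term.

    # Overdamped Langevin dynamics  dq = -V'(q) dt + sqrt(Q) dW, simulated by
    # Euler-Maruyama; for constant diffusion Q the stationary Fokker-Planck
    # density is P_st(q) ~ exp(-2 V(q) / Q).  All numerical values are
    # illustrative assumptions.
    import numpy as np

    Q = 0.5                                   # noise strength (constant diffusion)
    V = lambda q: -q**2 / 2 + q**4 / 4        # an illustrative double-well potential
    dV = lambda q: -q + q**3                  # its derivative

    rng = np.random.default_rng(0)
    dt, n_steps = 1e-3, 500_000
    q, samples = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        q += -dV(q) * dt + np.sqrt(Q * dt) * rng.standard_normal()
        samples[i] = q

    # Empirical histogram versus the analytic stationary density.
    grid = np.linspace(-2.0, 2.0, 201)
    p_st = np.exp(-2.0 * V(grid) / Q)
    p_st /= p_st.sum() * (grid[1] - grid[0])  # normalize on the grid
    hist, _ = np.histogram(samples, bins=50, range=(-2.0, 2.0), density=True)
    # hist and p_st (at the bin centres) agree up to sampling noise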
The resulting nonlinear terms are at the origin of the formation of patterns (nonequilibrium phase transition) through the stationary solution to the associated Fokker-Planck equation. This solution is the right factor in the decomposition of the joint probability for unstable and stable modes \[ P(\xi_u,\xi_s)= P(\xi_s|\xi_u) f(\xi_u). \tag{1} \] The conditional probability is obtained by using the density of the random disturbance affecting the stable dynamics in stationary conditions. Let us observe that the control parameter introduced on page 46 disappears in formula (2.72) and afterwards, so that it cannot be understood why it is needed. Probably it has some stabilization purpose, but the usual control-theoretic context is missing. This portion of the text, including the slaving principle, requires clarification.

Chapter three defines the information of a probability distribution in the sense of Shannon and shows that it is maximized by the uniform distribution. The solution to the optimization problem differs, however, if constraints are present. These constraints come from macroscopic quantities in the context of this study. The new optimal distribution is obtained. A sensitivity analysis of the entropy as a function of the perturbed constraints is provided. Finally, the author points out some technical difficulties in the continuous case: the integral defining the information diverges. In chapter four, the author applies the results of the previous chapter to the situation of classical thermodynamics. The constraint is now the energy, and the optimal distribution is the Boltzmann distribution. The corresponding information coincides with the usual thermodynamic entropy. The sensitivity analysis allows one to recover and define the temperature, the pressure and the chemical potentials.

Chapter five is an application of the maximum information principle to systems with phase transitions which are not in thermal equilibrium. Here one deals with lasers, and the constraints are the first and second moments of the intensity of the electric field. When the latter has several modes, one has to consider correlations between modes as constraints. The author bets that the same point of view is valid for phase transitions in thermal equilibrium. Chapter six considers general nonequilibrium phase transitions. The constraints are taken as moments of the states \(q\) up to order four. The maximum information principle provides a distribution of the form \(P= \exp (V(\lambda, q))\), where \(\lambda\) represents Lagrange multipliers and \(V\) is a polynomial of order four in the states. The phase transition eliminates the linear terms, and a rearrangement of the other terms allows one to recover a formula like (1), i.e. a version of the ``slaving principle''. The drift of the associated Langevin equation may also be recovered from the constructed quantities.

In chapter seven, one studies the behaviour of the information when a nonequilibrium phase transition is taking place. First, the author derives from (1) and Shannon's formula the form of the information for a master-slave system: one has a term from the master added to weighted contributions from the slaves. A similar form is obtained for the information gain (Kullback information). Next, the eigenvalue of the unstable mode (assumed unique for most of the chapter) is seen as a bifurcation parameter modelling the phase transition. The order parameter (unstable mode) is seen to be the main cause of information change.
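Chapter six's quartic exponential family also governs the order-parameter statistics here, so a small numerical sketch (ours; all parameter values are arbitrary assumptions) can make the information behaviour concrete: compute the Shannon information of \(f(\xi_u)\propto\exp\bigl((\alpha\xi_u^2-\beta\xi_u^4)/Q\bigr)\) as the bifurcation parameter \(\alpha\) is swept through the transition at \(\alpha=0\), where the single peak splits into two symmetric peaks.

    # Shannon information of a quartic order-parameter density
    #   f(x) ~ exp((alpha x^2 - beta x^4) / Q)
    # swept through the bifurcation at alpha = 0.  This is a differential
    # information (it may be negative; only its changes matter, cf. the
    # continuum caveat of chapter three).  beta, Q, alpha are assumptions.
    import numpy as np

    beta, Q = 1.0, 0.05
    grid = np.linspace(-2.5, 2.5, 2001)
    dx = grid[1] - grid[0]

    def information(alpha):
        logf = (alpha * grid**2 - beta * grid**4) / Q
        f = np.exp(logf - logf.max())
        f /= f.sum() * dx                     # normalize on the grid
        return -np.sum(f * np.log(f + 1e-300)) * dx

    for alpha in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print(f"alpha = {alpha:+.1f}   S = {information(alpha):.3f}")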
The single well of the potential undergoes a symmetry breaking at the critical point, and a double well is created: one bit of information can be stored. This increase of information (and entropy) is not in contradiction with the fact that there is more order: we are in a nonequilibrium situation, and energy has been pumped into the system. Following Klimontovich, this effect is quantified in the case of a laser: the energy changes before, at and after threshold compensate one another, resulting in a constant energy level, and the corresponding change of entropy is a decrease. A section is devoted to studying the information change of the enslaved modes: it decreases as energy is pumped into the system, since the Gaussian peak becomes narrower; this is quantified in the laser example. An analytical study in terms of special functions shows that the information and information gains due to the order parameters are maximal at the critical point. Returning numerically to the laser case, the information change at this point is a hundred times larger (in intensity) for the order parameter than for a stable mode as the bifurcation parameter changes.

The next chapter explains how the parameters of the Gaussian for the conditional probability (in relation to the stable modes) and those for the distribution function of the order parameters (with quadratic potential) can be obtained from the moments up to order four. Since there are more equations than unknowns, the remaining equations can be used to check the validity of the hypotheses. Chapter nine deals with the modelling of Markovian stochastic processes. The conditional probability (given the previous state \(q_i\)) is assumed to take the form \(P(q_{i+1}|q_i)= \exp (\lambda+ \lambda_1 q_{i+1}+ \lambda_2 q_{i+1}^2)\), and moments up to order two of \(q_{i+1}\) are assumed as constraints. Some manipulations with the parameters of the potential allow one to express a drift and a diffusion term for a related Fokker-Planck equation which is solved by \(P\). The resulting drift is a model for an underlying (Newtonian) dynamics. A section considers the case when correlation functions are used as constraints.

Chapter ten develops lasers further. The electric field now has a spatial dependence. Moments up to order four are used as constraints. In the single-mode case with polarization and inversion, additional moments up to order four are included in the model, but many vanish and the resulting distribution remains tractable. Chapter eleven explores another field of application, in biology: the parallel motion of the fingers of the two hands experiences a kind of phase transition when the frequency is increased, and antiparallel motion sets in. Moments up to fourth order of the motions are considered, but odd moments vanish. The distribution function depends on the phase shift between the two motions, and a phase transition (with the two meanings of the word phase) between 0 and \(\pi\) occurs. Using the dynamics (the corresponding Langevin equation), formulas are derived for the average phase shift and its standard deviation. They agree with experimental data. But the parameters of the dynamics cannot be determined exactly from moment information, and an additional relaxation time has to be used.
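The finger-coordination transition can be sketched numerically. The relative-phase dynamics below uses the well-known Haken-Kelso-Bunz form \(\dot\varphi = -a\sin\varphi - 2b\sin 2\varphi\) plus white noise as an illustrative stand-in (the coefficients, the noise level, and the identification of a decreasing \(b\) with an increasing cycling frequency are our assumptions, not formulas quoted from the book): the phase-locked state \(\varphi=\pi\) is locally stable exactly when \(4b>a\), and below that threshold the average phase shift moves to 0.

    # Langevin simulation of a relative-phase equation of Haken-Kelso-Bunz type,
    #   dphi = -(a sin(phi) + 2 b sin(2 phi)) dt + sqrt(Q) dW.
    # phi = pi is a local minimum of V = -a cos(phi) - b cos(2 phi) iff 4b > a,
    # so lowering b triggers a transition of the mean phase shift from pi to 0.
    # All parameter values are illustrative assumptions.
    import numpy as np

    def phase_stats(b, a=1.0, Q=0.05, dt=1e-3, n=200_000):
        rng = np.random.default_rng(5)
        phi, tail = np.pi, []
        for i in range(n):
            phi += -(a * np.sin(phi) + 2 * b * np.sin(2 * phi)) * dt \
                   + np.sqrt(Q * dt) * rng.standard_normal()
            if i >= n // 2:                   # discard the transient half
                tail.append(phi)
        z = np.mean(np.exp(1j * np.asarray(tail)))            # circular statistics
        return np.angle(z), np.sqrt(-2 * np.log(np.abs(z)))   # mean and std of phase

    for b in (1.0, 0.1):                      # above and below the threshold b = a/4
        mean, std = phase_stats(b)
        print(f"b = {b:.1f}   mean phase = {mean:+.2f}   std = {std:.2f}")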
Chapter twelve applies the author's approach to pattern and process recognition. Features are seen as components of a vector representing a pattern. Noisy measurements impose a probabilistic description of the patterns. The matrix of correlations is diagonalised, and the information for the associated distribution is computed. The eigenvalues are interpreted as order parameters (but everything is linear here). A synergetic computer with associative memory sends incoming, possibly incomplete, patterns to already encoded known complete patterns via a potential function with several minima (a polynomial of order four), a minimum being reached when the incoming pattern vector is parallel to the encoded one. Of course the coefficients of the polynomial (i.e. the order parameters) shape the potential, and they can be learned via a gradient algorithm which extremizes the information gain between the distribution of the incoming patterns and the one in the system.

Next the author concentrates on Markovian processes. He assumes that the joint probability can be written \[ P(q_{i+\tau}, q_i)= P(q_{i+\tau} |q_i) P_{st} (q_i), \] where \(P_{st}\) is a steady-state probability distribution determined by constraints of correlation type and the conditional probability is determined by moments up to order two (cf. chapter nine). Now the constraints depend on \(q_i\), so that the stochastic dynamic model involves a state-dependent drift and diffusion matrix. Next, for the conditional (resp. the joint) probability, the author proposes a learning scheme for the order parameters through a gradient algorithm which extremizes the information gain between a probability distribution characterized by experimentally determined moments (resp. correlations) and the given analytic one (solving an optimal information principle). The stationarity conditions lead to the determination of the (state-dependent) drift and diffusion matrix of the corresponding Fokker-Planck equation from experimental data. The convergence of the algorithms is not established; moreover, the determination of the drift in the second case assumes that it is allowed to use the formula for the drift in the first case (coming from stationarity). This point should be justified. Some examples and refinements end this chapter. Testing the Markovian character of a process is related to chaotic dynamics and time-series analysis.

The last chapter is a quantum version of the previous picture. One replaces classical variables canonically by operators and proceeds in a parallel fashion. The technical difficulties due to the noncommutativity of the operators can be handled without affecting the final form of the results. Formula (13.59) on page 204 assumes that order parameters can be described classically, suggesting implicitly that the classical quantities would arise through a phase transition from the quantum ones; but this assumption could be logically shaky, given that the canonical quantization which is used is not justified either. A conclusion, bibliographical sources and references end this volume.

A few typographical mistakes were detected: on page 56 one has \(M\) containers and not \(C\) containers, and on page 124 \(\langle u^4\rangle\) and \(\langle u^2\rangle^2\) have to be interchanged in formula (8.90). Note also that on pages 59 and 60 the notation in formula (3.43) is awkward, because \(\langle f_i^{(k)}\rangle\) depends on \(i\) and is identical with \(f_k\) in formula (3.34). Let us also observe a difference in points of view: thermodynamics is implied by synergetics in chapter four, but on page 14 it is also seen as a limiting case encompassing synergetic subsystems.
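Before turning to an overall assessment, the associative recall of chapter twelve lends itself to a compact illustration. The competition dynamics below is a commonly quoted form of the synergetic computer, reconstructed here under that assumption with illustrative prototypes and coefficients (not the book's exact equations): the order parameters \(\xi_k\), i.e. the overlaps with the stored prototypes, compete until one survives, and the noisy input is completed to the corresponding pattern.

    # Toy "synergetic computer" recall: order parameters xi_k = v_k . q
    # (overlaps with stored prototypes v_k) obey the competition dynamics
    #   d(xi_k)/dt = xi_k * (lam - B * sum_{k' != k} xi_{k'}^2 - C * sum_{k'} xi_{k'}^2),
    # a commonly quoted quartic-potential form: the winning overlap goes to
    # +-1, the others to 0, so a noisy input is pulled onto the nearest
    # stored pattern.  Prototypes, coefficients and noise are assumptions.
    import numpy as np

    rng = np.random.default_rng(4)
    V, _ = np.linalg.qr(rng.standard_normal((8, 3)))   # 3 orthonormal prototypes in R^8
    q = V[:, 1] + 0.4 * rng.standard_normal(8)         # noisy version of prototype 1

    lam, B, C, dt = 1.0, 1.0, 1.0, 0.05
    xi = V.T @ q                                       # initial order parameters
    for _ in range(400):
        total = np.sum(xi**2)
        xi += dt * xi * (lam - B * (total - xi**2) - C * total)

    print(np.round(xi, 3))                             # ~ [0, +-1, 0]: prototype 1 wins
    q_recalled = V @ xi                                # the completed pattern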
Let us remark that this work treats simple forms of organization through the master-slave relation, which is typical in biology. For social or cognitive systems, other forms of organization might be considered, such as competition. Let us however observe that this study provides insight into the genesis of dictatorial systems after a disordered period (cf. the economic crisis in Germany before the rise of Hitler, or the French revolution before the rise of Napoleon), where the conscious individual cannot do much, because not only the orders of the authority but also the inertia of the masses, which communicate with uniform, information-poor signals, maintain the system.

The book is written by a physicist, as can be recognized by the emphasis on intuitive ideas rather than technicalities, by the confirmation of an analysis through asymptotic expansions and computations, by the study of several experimental situations that illustrate the ideas (more than prove them), and by a sentence like the one on page 79: ``Therefore we must look for another criterion and the one we choose is simplicity''. This volume is clearly written and can be understood by a wide audience, which is a merit. Several aspects of the book make connections with classical control theory, like the stabilization picture for enslaved modes and the identification of stochastic processes, although the expression does not appear in the book. Several publications not cited in the book have dealt with the use of the Kullback information for identification and modelling (P. Dhrymes, R. Kulhavy et al., V. Krishnamurthy) or with the Fokker-Planck equation for the same purpose (G. Inglese, B. Stepinski). The concept of entropy is used in several contexts in control theory (although the energy aspect, due to the huge (excessive?) influence of optimal control, is predominant), but the one closest in spirit to the author's view can be found in publications of \textit{P. M. Auger} [e.g. Syst. Res. 7, No. 4, 221-236 (1990; Zbl 0739.93004)], where entropy computations in hierarchically organized systems are performed. Let us mention the work of G. Jumarie and also the interesting publication of \textit{M. Pavon} [Appl. Math. Optimization 19, No. 2, 187-202 (1989; Zbl 0664.60046)], where the entropy is the value function of a stochastic optimal control problem, providing a Hamilton-Jacobi theory for nonequilibrium thermodynamical systems. Moreover, the input-output approach in systems science seems well suited to the open systems considered here, and one may suggest not only describing macroscopic patterns but also regulating them. Finally, let us mention R. Thom (not cited), who in his catastrophe theory considers structural stability (instead of bifurcations or phase transitions) in relation to singularities and morphogenesis.

This work presents an attempt at unravelling unifying principles for understanding and dealing with diversity. This goal has been one of the main preoccupations of religious and scientific activity. On page 14 the author states: ``Synergetics is very much an open-ended field in which we have made only the very first steps'', and later: ``still more [general] laws can be found''. He adds that his program is not finished and leaves space for future research. On page 209 one finds in the conclusion: ``complex systems seem to be inexhaustible''. So, thank (the) God(s), diversity is there to keep us busy.
    complex systems
    pattern recognition
    nonequilibrium phase transitions
    information
    synergetics
    self-organization
    open systems
    laser
    Fokker-Planck equation
    Langevin equation
    entropy
    symmetry breaking
