Maximum-entropy and Bayesian methods in science and engineering. Vol. I: Foundations. Vol. II: Applications. (Proceedings of the 5th, 6th and 7th workshops held at the University of Wyoming, August 5-8, 1985, and at Seattle University, August 5-8, 1986, and August 4-7, 1987) (Q1188509)

From MaRDI portal
scientific article
Language Label Description Also known as
English
Maximum-entropy and Bayesian methods in science and engineering. Vol. I: Foundations. Vol. II: Applications. (Proceedings of the 5th, 6th and 7th workshops held at the University of Wyoming, August 5-8, 1985, and at Seattle University, August 5-8, 1986, and August 4-7, 1987)
scientific article

    Statements

    Maximum-entropy and Bayesian methods in science and engineering. Vol. I: Foundations. Vol. II: Applications. (Proceedings of the 5th, 6th and 7th workshops held at the University of Wyoming, August 5-8, 1985, and at Seattle University, August 5-8, 1986, and August 4-7, 1987) (English)
    0 references
    17 September 1992
    0 references
[The articles of this volume will not be indexed individually.] The two volumes present a wide variety of topics by leading researchers in the fields: Bayesian spectrum analysis, Bayesian inductive inference, uncertainty and measurement, the search for extraterrestrial intelligence, inductive reasoning by neural networks, information theory in biology, expert systems, generalized inhomogeneous systems, and beamforming. \textit{E. T. Jaynes} presents four papers. The first is a beautiful, detailed tutorial on the Cox-Polya-Jaynes approach to Bayesian probability theory and the maximum-entropy principle. The second paper briefly describes the great conceptual differences, and equally great mathematical similarities, of Bayesian and ME methods. Jaynes' next paper is based on Bretthorst's contribution mentioned below; he speculates on appropriate data analysis methods, making use of probability theory to perform the optimal deconvolution of the point-spread function. In his last paper, he shows the evolution of Carnot's principle, via the lines of Kelvin, Clausius, Gibbs and Boltzmann, directly from first principles of inference. In \textit{G. L. Bretthorst}'s paper, Jaynes' approach to periodograms is studied: the periodogram is a sufficient statistic for determining the spectrum of a time-sampled data set containing a single stationary frequency. He extends this analysis and explicitly calculates the joint posterior probability that multiple frequencies are present, independent of their amplitude and phase, and of the noise level. Results are given for computer-simulated data and for real data ranging from magnetic resonance and astronomy to economic cycles. Fourier transform methods are used for substantial improvements in resolution. The material on which \textit{M. Tribus}' paper is based is scattered throughout the literature and seldom brought together in a coherent whole.
In this presentation, many difficult steps in mathematics are skipped so as to bring out the flow of ideas. \textit{S. F. Gull} applies the principles of Bayesian reasoning to problems of inference from data sampled from Poisson, Gaussian and Cauchy distributions. Probability distributions (priors and likelihoods) are assigned in appropriate hypothesis spaces using the ME principle, and then treated via Bayes' theorem. Since Bayesian hypothesis testing requires careful consideration of the prior ranges of any parameters involved, a quantitative statement of Occam's razor is proposed. As an example, Gull offers a solution to a problem in regression analysis: determination of the optimal number of parameters for fitting graphical data with a set of basis functions. \textit{J. Rissanen} presents an outline of a modeling principle based upon the search for the shortest code length of the data; the stochastic complexity is defined. This principle is generally applicable to statistical problems, and when restricted to the special exponential family arising in the ME formalism with a set of moment constraints, it provides a generalization which permits the constraints, or their number, to be optimized as well. \textit{J. Skilling} presents the ME method as a universal one for finding a ``best'' positive distribution constrained by incomplete data. The justification of this approach is based upon four axioms: subset independence, coordinate invariance, system independence and scaling. \textit{C. C. Rodriguez} makes use of a simple paradox to introduce new noninformative priors and to point out the connections between the state of knowledge of ``total ignorance'' and self-similarity. Generalization of these considerations could produce useful results for the artificial intelligence problem of knowledge representation. \textit{P. F. Fougère} presents some methods of ME calculations on a discrete probability space; B. R. Frieden's and J. Makhoul's criticisms of the ME principle are answered here. \textit{R. Blankenbecler} and \textit{M. H. Partovi} present a discussion of the determination of the quantum density matrix from realistic measurements using the ME principle. \textit{A. J. M. Garret} reviews recent suggestions on how to extend the uncertainty principle by using the concept of information. The Heisenberg variance principle is shown to be a special case for canonically conjugate continuous variables, and the possibility of further generalization is considered. \textit{M. H. Partovi} and \textit{R. Blankenbecler} present a discussion of their microstatistical formalism for multitime quantum measurements. They show that this formalism is capable of dealing with time in quantum mechanics in a rigorous way, and enables one to state precisely and derive time-energy uncertainty relations. Application to the problem of the quantum limit of accuracy of position measurements in the context of gravitational wave detection is briefly discussed. \textit{R. N. Madan} discusses the problem of estimation of parameters in a Rayleigh distribution modified to take into account additional information. A new type of entropy estimator is proposed, which is the ratio of the arithmetic mean to the geometric mean, times a normalizing constant; this estimator is numerically compared with some other estimators. \textit{N. C. Dalkey} describes a logic of information systems as a lattice with the information systems as elements. The partial order in this lattice is defined by a partial order in the expected payoff space. Possibilities of application are demonstrated by means of an empirical example, and the presented model is highly important for artificial intelligence methods. \textit{G. J. Klir} presents methodological principles of uncertainty: nonspecificity, fuzziness, dissonance, and confusion. Well-justified measures of these types of uncertainty are described. \textit{N. C. Dalkey} compares minimum cross-entropy inference with minimally informative information systems. He shows that formulating inductive inference rules in terms of information systems, rather than probability distributions, is likely to generate stronger procedures. \textit{S. R. Deans} explains the connections between the search for extraterrestrial intelligence (SETI), Radon transforms, and optimization. He reports recent observations by the NASA-SETI team using prototype SETI hardware, and outlines a data analysis problem facing the Microwave Observing Project; furthermore, he gives some preliminary results that may prove useful for the solution of that problem. An application to processing a prodigious amount of data in real time is shown. \textit{D. Hestenes} discusses underlying network design principles and mechanisms of human perception, and explains their relation to theories of signal processing and statistical inference. \textit{C. C. Rodriguez} studies asymptotic efficiency in nonparametric models. He is concerned with nonparametric models where the set of elementary events is a fixed compact subset of the real line and involves a fixed bounded function. \textit{Y. Tikochinsky} and \textit{S. Shalitin} review the Wigner formulation of quantum mechanics in phase space. Using this formulation, the classical limit of the quantum mechanical description of a system characterized by a given sharp value of an observable, or by a given expectation value of the same observable, is discussed. \textit{A. K. Rajagopal}, \textit{P. J. Lin-Chung} and \textit{S. Teitler} consider superpositions of probability amplitudes whose squared magnitude represents a probability density. Their particular interest is in the characterization of weighting and interference effects as revealed by the properties of the corresponding differential entropy and Kullback-Leibler information; some inequalities arising from both these effects are established. In \textit{L. H. Schick}'s paper, the ME principle is combined with the usual form of the variational principle in quantum mechanics to obtain super-variational principles, which may be used when the form of the potential energy operator is partially or wholly unknown. Details are worked out for a few simple bound-state problems, and possible generalizations are discussed. \textit{A. K. Rajagopal} and \textit{S. Teitler} show explicitly for the case of particle statistics that the Boltzmann principle is an approximation to Einstein's reversal when the entropy is the Shannon entropy in an appropriate physical context. \textit{C. T. Lee} proves that the classical entropy of a coherent state of an N-spin system is a local minimum; it is also conjectured that the maximum entropy is attained when the N points form a regular polyhedron. \textit{A. K. Rajagopal} and \textit{S. Teitler} present a further paper devoted to a density matrix form of the Heisenberg uncertainty relation, obtained by applying the ME principle to the density matrix subject to the given dispersions as constraints. They examine this approach from a somewhat different viewpoint to show several novel consequences of this formulation; their principal interest is in the discussion of minimum uncertainty coherent states. In \textit{J. Skilling}'s paper, the direct f log f and indirect log f entropy formulae are used in ME spectroscopy. The direct form is shown to be appropriate for finding the single ``best'' spectrum from incomplete data. The author argues that the indirect form should be used to find an underlying probability distribution function, but not the spectrum itself; examples show how and why the indirect form is liable to give misleadingly sharp spectra. \textit{T. D. Schneider} suggests that the destructive tendency of the entropy to increase in isolated systems may also apply to the genetic material. \textit{R. K. Bryan} shows areas where the ME method may prove useful in protein crystallography structure determination, because the application of classical methods of phase determination is not always possible. In his next paper he shows how the deconvolution of X-ray crystallography images can be achieved by the ME method, where prior knowledge of the particle radius is imposed. The deconvolution is combined with the Radon problem, so that the averaged radial density distribution of the particle is calculated, and the parameters defining the contrast transfer function, which are not known precisely, are also refined. \textit{R. G. Currie} studies climatically induced cyclic variations in United States crop production. He explains why evidence for the ``Kuznets long swing'' effect in aggregate economic data began to deteriorate after the turn of the century. \textit{A. Lippman} considers ME methods for the construction of expert systems. He shows that the construction of an expert system is equivalent to the minimization of a convex function in as many dimensions as there are pieces of knowledge supplied to the system. \textit{S. Geman} presents an introduction to stochastic relaxation, a highly parallel computational algorithm for various inference and optimization problems. The presentation proceeds by examples highlighting possible applications to image processing and expert systems. \textit{S. A. Farrow} and \textit{F. P. Ottensmeyer} show a method of bias correction in electron microscope images, introduced by the use of the error fitting method in conjunction with an ME processing algorithm. \textit{S. A. Goldman} and \textit{R. L. Rivest} present a new way to compute maximum-entropy probability distributions satisfying a set of constraints. This method is integrated with the planning of data collection and tabulation.
They show how adding constraints and performing the associated additional tabulations can substantially speed up the computation by replacing the usual iterative techniques with a straightforward computation. \textit{A. Mohammad-Djafari} and \textit{G. Demoment} show that the ME approach is appropriate for solving some inverse problems arising at different levels of various image restoration and reconstruction problems. \textit{M. J. Miller} shows that, for the class of likelihood problems resulting from a complete-incomplete data specification in which the complete data \(\bar x\) are not uniquely determined by the measured incomplete data \(\bar y\) via some many-to-one set of mappings \(\bar y=h(\bar x)\), the density which maximizes the entropy is identical to the conditional density of the complete data given the incomplete data which would be derived via the rules of conditional probability. It is demonstrated that, for the problem of spectrum estimation from finite data sets, this view results in the derivation of maximum-likelihood estimates of the Toeplitz-constrained covariance parameters via an iterative maximization of the likelihood function. \textit{K. L. Ngai}, \textit{A. K. Rajagopal} and \textit{S. Teitler} introduce the concept of epoch entropy for relaxation processes. The relations of empirical rules and evaluations of epoch entropy to some existing models of relaxation are indicated. \textit{S. A. Trugman} presents the application of ME methods to the study of inhomogeneous systems; both the structural and the transport properties of such systems are considered. \textit{P. M. Doyen} uses ME methods to infer the dimensions of cracks in granite rocks from measurements of their hydraulic permeability and of their electrical conductivity. \textit{J. H. Root}, \textit{P. A. Egelstaff} and \textit{B. G. Nickel} describe the application of ME methods for reducing truncation effects in the three-dimensional inverse Fourier transforms of liquid diffraction data. \textit{K. H. Norsworthy} and \textit{P. N. Michels} show that the high sidelobes of random array beams arise principally from the mishandling of absent data; new random array beamforming methods are described. Finally, \textit{Y. Cheng} and \textit{R. L. Kashyap} compare the evidence combination given by the Bayesian method and by Dempster's rule, and show that Dempster's method is narrower than that of Bayes. All contributions in these volumes are of a high level, and many results are important for further theoretical and empirical investigations.
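The maximum-entropy construction that recurs throughout these volumes (the exponential family with moment constraints noted in Rissanen's paper, and the constrained distributions of Goldman and Rivest) can be illustrated with a small sketch. This code is not from the proceedings; it assumes, for illustration only, a finite probability space, a single mean constraint, and a bisection solver for the Lagrange multiplier.

```python
import math

def maxent_distribution(values, mean, tol=1e-10):
    """Maximum-entropy distribution on a finite set of outcomes subject to
    a mean constraint. The ME solution is the exponential family
    p_i proportional to exp(-lam * x_i), with the multiplier lam chosen so
    that sum_i p_i * x_i equals the prescribed mean.
    (Assumes |values| are modest, so exp() does not overflow.)"""
    def moment(lam):
        w = [math.exp(-lam * x) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z
    # moment(lam) is monotone decreasing in lam, so bisection suffices.
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if moment(mid) > mean:
            lo = mid  # mean too high: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# With only a mean constraint equal to the unconstrained average of a die,
# the ME distribution is uniform; a biased mean tilts it exponentially.
p = maxent_distribution([1, 2, 3, 4, 5, 6], mean=3.5)
q = maxent_distribution([1, 2, 3, 4, 5, 6], mean=4.5)
```

Here `p` recovers the uniform distribution, while `q` is exponentially tilted toward large outcomes, the standard behaviour of the ME formalism under moment constraints.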
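The periodogram that Bretthorst's analysis identifies as a sufficient statistic for a single stationary frequency is also easy to sketch. The code below is an illustration, not material from the volumes; it computes the Schuster periodogram \(C(f)=|\sum_j d_j e^{2\pi i f t_j}|^2/N\) for a toy sinusoid and locates its peak on a frequency grid (the grid, sample size and test frequency are arbitrary choices).

```python
import cmath
import math

def schuster_periodogram(data, times, freq):
    """Schuster periodogram C(f) = |sum_j d_j exp(2*pi*i*f*t_j)|^2 / N,
    the statistic whose peak locates a single stationary frequency."""
    n = len(data)
    s = sum(d * cmath.exp(2j * cmath.pi * freq * t)
            for d, t in zip(data, times))
    return abs(s) ** 2 / n

# Toy data: a pure 0.1 Hz sinusoid sampled at unit intervals.
times = list(range(64))
data = [math.cos(2 * math.pi * 0.1 * t) for t in times]

# Scan frequencies in (0, 0.5); the periodogram peaks at (very near)
# the true frequency.
grid = [k / 1000 for k in range(1, 500)]
best = max(grid, key=lambda f: schuster_periodogram(data, times, f))
```

In Bretthorst's Bayesian treatment, the posterior probability of the frequency is a monotone function of this statistic, which is why maximizing the periodogram and maximizing the posterior pick out the same frequency for single-frequency data.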
    Bayesian Methods
    0 references
    Maximum-entropy
    0 references
    Entropy
    0 references
    Workshop
    0 references
    Laramie, WY (USA)
    0 references
    Seattle, WA (USA)
    0 references
    Bayesian spectrum analysis
    0 references
    Bayesian inductive inference
    0 references
    uncertainty
    0 references
    measurement
    0 references
    inductive reasoning by neural networks
    0 references
    information theory
    0 references
    biology
    0 references
    expert systems
    0 references
    generalized inhomogeneous systems
    0 references
    beamforming
    0 references
    maximum-entropy principle
    0 references
    ME methods
    0 references
    data analysis
    0 references
    deconvolution
    0 references
    evolution of Carnot's principle
    0 references
    periodograms
    0 references
    sufficient statistic
    0 references
    Fourier transform methods
    0 references
    Bayes' theorem
    0 references
    Bayesian hypothesis testing
    0 references
    fitting graphical data
    0 references
    shortest code length
    0 references
    stochastic complexity
    0 references
    exponential family
    0 references
    incomplete data
    0 references
    subset independence
    0 references
    coordinate invariance
    0 references
    system independence
    0 references
    scaling
    0 references
    new noninformative priors
    0 references
    knowledge representation
    0 references
    quantum density matrix
    0 references
    uncertainty principle
    0 references
    Heisenberg variance principle
    0 references
    microstatistical formalism
    0 references
    multitime quantum measurements
    0 references
    gravitational wave detection
    0 references
    Rayleigh distribution
    0 references
    entropy estimator
    0 references
    logic of information systems
    0 references
    lattice
    0 references
    nonspecificity
    0 references
    fuzziness
    0 references
    dissonance
    0 references
    confusion
    0 references
    minimum cross-entropy inference
    0 references
    minimally informative information systems
    0 references
    search for extraterrestrial intelligence
    0 references
    Radon transforms
    0 references
    optimization
    0 references
    network design principles
    0 references
    mechanisms of human perception
    0 references
    signal processing
    0 references
    asymptotic efficiency
    0 references
    nonparametric models
    0 references
    Wigner formulation of quantum mechanics
    0 references
    phase space
    0 references
    differential entropy
    0 references
    Kullback-Leibler information
    0 references
    variational principle
    0 references
    super-variational principles
    0 references
    particle statistics
    0 references
    Boltzmann principle
    0 references
    Shannon entropy
    0 references
    Heisenberg uncertainty relation
    0 references
    minimum uncertainty coherent states
    0 references
    ME spectroscopy
    0 references
    X-ray crystallography
    0 references
    stochastic relaxation
    0 references
    image processing
    0 references
    bias correction
    0 references
    electron microscope images
    0 references
    inverse problems
    0 references
    image restoration and reconstruction
    0 references
    spectrum estimation
    0 references
    maximum-likelihood estimates
    0 references
    epoch entropy
    0 references
    relaxation processes
    0 references
    three-dimensional inverse Fourier transforms for liquid diffraction data
    0 references
    random array beams
    0 references
    Dempster's rule
    0 references