Statistical Inference, Occam's Razor, and Statistical Mechanics on the Space of Probability Distributions

From MaRDI portal
Publication: 3125239

DOI: 10.1162/neco.1997.9.2.349
zbMath: 0870.62006
arXiv: cond-mat/9601030
OpenAlex: W2124641450
MaRDI QID: Q3125239

Vijay Balasubramanian

Publication date: 1 September 1997

Published in: Neural Computation

Full work available at URL: https://arxiv.org/abs/cond-mat/9601030



Related Items

- Marginal Likelihood Computation for Model Selection and Hypothesis Testing: An Extensive Review
- A Fisher-Rao metric for curves using the information in edges
- Counting probability distributions: Differential geometry and model selection
- Relative entropy and proximity of quantum field theories
- Application of the Fisher-Rao metric to structure detection
- A Note on the Applied Use of MDL Approximations
- Bayesian maximum entropy based algorithm for digital X-ray mammogram processing
- Schwarz, Wallace, and Rissanen: Intertwining Themes in Theories of Model Selection
- Coincidences and estimation of entropies of random variables with large cardinalities
- The flexibility of models of recognition memory: the case of confidence ratings
- Selecting amongst multinomial models: an apologia for normalized maximum likelihood
- Harold Jeffreys's Theory of Probability revisited
- Theoretical investigations of an information geometric approach to complexity
- Bayesian Feature Selection with Strongly Regularizing Priors Maps to the Ising Model
- Latent Features in Similarity Judgments: A Nonparametric Bayesian Approach
- The flexibility of models of recognition memory: an analysis by the minimum-description length principle
- Comparative noninformativities of quantum priors based on monotone metrics
- Cooperation, competition and the emergence of criticality in communities of adaptive systems
- Fluctuation-Dissipation Theorem and Models of Learning
- Functional Uniform Priors for Nonlinear Modeling
- Estimating Entropy Rates with Bayesian Confidence Intervals
- Discrepancy risk model selection test theory for comparing possibly misspecified or nonnested models
- How Many Clusters? An Information-Theoretic Perspective
- Predictability, Complexity, and Learning
- Complexity through nonextensivity
- Model selection by normalized maximum likelihood
- An empirical study of minimum description length model selection with infinite parametric complexity
- Minimum message length inference of the Poisson and geometric models using heavy-tailed prior distributions
- Bayes factors: Prior sensitivity and model generalizability
- On the Complexity of Logistic Regression Models
- On the computation of entropy prior complexity and marginal prior distribution for the Bernoulli model
- Objective Bayesian estimation for the differential entropy measure under generalized half-normal distribution



Cites Work