Minimum description length revisited

From MaRDI portal

DOI: 10.1142/S2661335219300018
zbMATH Open: 1476.62020
arXiv: 1908.08484
OpenAlex: W3098044966
Wikidata: Q109278499
MaRDI QID: Q4997077


Authors: Peter D. Grünwald, Teemu Roos


Publication date: 28 June 2021

Published in: International Journal of Mathematics for Industry

Abstract: This is an up-to-date introduction to, and overview of, the Minimum Description Length (MDL) principle, a theory of inductive inference that can be applied to general problems in statistics, machine learning, and pattern recognition. While MDL was originally based on data-compression ideas, this introduction can be read without any knowledge thereof. It takes into account all major developments since 2007, the last time an extensive overview was written. These include new methods for model selection, model averaging, and hypothesis testing, as well as the first completely general definition of MDL estimators. Incorporating these developments, MDL can be seen as a powerful extension of both penalized-likelihood and Bayesian approaches, in which penalization functions and prior distributions are replaced by more general luckiness functions, average-case methodology is replaced by a more robust worst-case approach, and methods classically viewed as highly distinct, such as AIC vs. BIC and cross-validation vs. Bayes, can to a large extent be viewed from a unified perspective.
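To make the abstract's connection between MDL and penalized likelihood concrete, the sketch below applies a crude two-part MDL score to polynomial model selection: total description length is approximated as the bits needed for the residuals plus a BIC-style (k/2) log2 n cost for the parameters. This is an illustrative simplification under a Gaussian-noise assumption, not the refined (e.g. normalized maximum likelihood) codes the paper develops; the function name and coding scheme are the editor's, not the authors'.

```python
import numpy as np

def two_part_mdl(x, y, max_degree=5):
    """Crude two-part MDL score for polynomial regression.

    Data cost: (n/2) * log2(RSS / n) bits, from a Gaussian noise model
    with the noise variance plugged in at its maximum-likelihood value.
    Model cost: (k/2) * log2(n) bits, the BIC-style parameter penalty.
    Returns (best_degree, scores), where best_degree minimizes the sum.
    """
    n = len(x)
    scores = {}
    for d in range(max_degree + 1):
        coeffs = np.polyfit(x, y, d)
        rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        k = d + 1  # free parameters of a degree-d polynomial
        data_bits = 0.5 * n * np.log2(max(rss / n, 1e-12))  # guard log(0)
        model_bits = 0.5 * k * np.log2(n)
        scores[d] = data_bits + model_bits
    return min(scores, key=scores.get), scores

# Synthetic data: a quadratic signal plus small Gaussian noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.1, size=x.size)

best, scores = two_part_mdl(x, y)
print(best)
```

On data like this the score typically recovers the true degree 2: lower degrees pay heavily in residual bits, while higher degrees shave off too few residual bits to repay the extra parameter cost, which is exactly the compression-based trade-off the MDL principle formalizes.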


Full work available at URL: https://arxiv.org/abs/1908.08484




Cited in 16 works.


