A decision-theoretic extension of stochastic complexity and its applications to learning
From MaRDI portal
Publication:4701114
DOI: 10.1109/18.681319 · zbMath: 0935.94005 · OpenAlex: W2105154442 · MaRDI QID: Q4701114
Publication date: 21 November 1999
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/18.681319
Keywords: prediction; upper bounds; learning; minimum description length; statistical risk; stochastic complexity; batch-learning
MSC: Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05); Information theory (general) (94A15); Statistical aspects of information-theoretic topics (62B10)
Related Items (15)
Suboptimal behavior of Bayes and MDL in classification under misspecification ⋮ Learning Coefficient of Generalization Error in Bayesian Estimation and Vandermonde Matrix-Type Singularity ⋮ Unnamed Item ⋮ Predicting a binary sequence almost as well as the optimal biased coin ⋮ Singularities in mixture models and upper bounds of stochastic complexity. ⋮ Learning genetic population structures using minimization of stochastic complexity ⋮ Analysis of two gradient-based algorithms for on-line regression ⋮ Algebraic Analysis for Nonidentifiable Learning Machines ⋮ Subspace Information Criterion for Model Selection ⋮ A lower bound on compression of unknown alphabets ⋮ A Bayesian Learning Coefficient of Generalization Error and Vandermonde Matrix-Type Singularities ⋮ Asymptotic analysis of Bayesian generalization error with Newton diagram ⋮ Distributed cooperative Bayesian learning strategies. ⋮ Adaptive and self-confident on-line learning algorithms ⋮ Relative expected instantaneous loss bounds