On the minimal penalty for Markov order estimation
From MaRDI portal
Abstract: We show that large-scale typicality of Markov sample paths implies that the likelihood ratio statistic satisfies a law of iterated logarithm uniformly to the same scale. As a consequence, the penalized likelihood Markov order estimator is strongly consistent for penalties growing as slowly as log log n when an upper bound is imposed on the order which may grow as rapidly as log n. Our method of proof, using techniques from empirical process theory, does not rely on the explicit expression for the maximum likelihood estimator in the Markov case and could therefore be applicable in other settings.
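The abstract describes a penalized likelihood order estimator: for each candidate order k up to a bound, maximize the log-likelihood of a Markov chain of that order and subtract a penalty; consistency holds even for penalties growing as slowly as log log n. A minimal sketch in Python, where the specific penalty form c·|A|^k·log log n, the constant c, and all function names are illustrative assumptions for this sketch, not the paper's exact construction:

```python
from collections import Counter
from math import log

def max_log_likelihood(x, k):
    # Maximized log-likelihood of an order-k Markov chain:
    # sum over contexts c and symbols a of N(c, a) * log(N(c, a) / N(c)),
    # where N counts occurrences in the sample path x.
    trans = Counter()  # counts of (context, next symbol)
    ctx = Counter()    # counts of each length-k context
    for i in range(k, len(x)):
        c = tuple(x[i - k:i])
        trans[(c, x[i])] += 1
        ctx[c] += 1
    return sum(n * log(n / ctx[c]) for (c, a), n in trans.items())

def estimate_order(x, alphabet_size, max_order, c=1.0):
    # Penalized-likelihood order estimator: pick the k maximizing
    # ML_k(x) - pen(n, k). The penalty c * |A|^k * log(log(n)) is one
    # illustrative choice in the log log n regime the abstract mentions.
    n = len(x)
    def score(k):
        pen = c * (alphabet_size ** k) * log(log(n))
        return max_log_likelihood(x, k) - pen
    return max(range(max_order + 1), key=score)
```

For example, on the alternating sequence 0, 1, 0, 1, … the order-1 fit is already deterministic (log-likelihood 0), so the estimator returns 1: higher orders gain no likelihood but pay a larger penalty.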
Cites work
- scientific article; zbMATH DE number 1420699
- Concentration inequalities and model selection. Ecole d'Eté de Probabilités de Saint-Flour XXXIII -- 2003.
- Context tree estimation for not necessarily finite memory processes, via BIC and MDL
- Large-scale typicality of Markov sample paths and consistency of MDL order estimators
- Strongly consistent code-based identification and order estimation for constrained finite-state model classes
- The consistency of the BIC Markov order estimator.
- Weak convergence and empirical processes. With applications to statistics
Cited in (5)
- Personalized online ensemble machine learning with applications for dynamic data streams
- On universal algorithms for classifying and predicting stationary processes
- Large-scale typicality of Markov sample paths and consistency of MDL order estimators
- Model-based clustering in simple hypergraphs through a stochastic blockmodel
- Divergence rates of Markov order estimators and their application to statistical estimation of stationary ergodic processes