Divergence rates of Markov order estimators and their application to statistical estimation of stationary ergodic processes
From MaRDI portal
Abstract: Stationary ergodic processes with finite alphabets are estimated by finite memory processes from a sample, an \(n\)-length realization of the process, where the memory depth of the estimator process is also estimated from the sample using penalized maximum likelihood (PML). Under some assumptions on the continuity rate and the assumption of non-nullness, a rate of convergence in \(\overline d\)-distance is obtained, with explicit constants. The result requires an analysis of the divergence of PML Markov order estimators for not necessarily finite memory processes. This divergence problem is investigated in more generality for three information criteria: the Bayesian information criterion with generalized penalty term yielding the PML, and the normalized maximum likelihood and the Krichevsky-Trofimov code lengths. Lower and upper bounds on the estimated order are obtained. The notion of consistent Markov order estimation is generalized for infinite memory processes using the concept of oracle order estimates, and generalized consistency of the PML Markov order estimator is presented.
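The PML/BIC order estimation discussed in the abstract can be illustrated with a minimal sketch (not the paper's exact procedure or penalty constants): choose the order \(k\) minimizing the negative maximized log-likelihood under order-\(k\) Markov models plus a BIC-style penalty \(\frac{|A|^k(|A|-1)}{2}\log n\). The function names below are illustrative, not from the paper.

```python
from collections import Counter
from math import log

def neg_log_ml(sample, k):
    """-log maximum likelihood of the sample under order-k Markov models,
    computed from empirical counts of length-k contexts and transitions."""
    ctx = Counter()    # counts of length-k contexts
    trans = Counter()  # counts of (context, next symbol) pairs
    for i in range(k, len(sample)):
        c = tuple(sample[i - k:i])
        ctx[c] += 1
        trans[(c, sample[i])] += 1
    # ML assigns each transition probability n_{c,x} / n_c
    return -sum(n * log(n / ctx[c]) for (c, _), n in trans.items())

def bic_order_estimate(sample, max_order):
    """BIC-style Markov order estimator: argmin_k of -log ML + penalty,
    with penalty (|A|^k (|A|-1) / 2) * log n (a common BIC choice)."""
    a = len(set(sample))  # alphabet size |A|
    n = len(sample)
    def score(k):
        return neg_log_ml(sample, k) + (a ** k) * (a - 1) / 2 * log(n)
    return min(range(max_order + 1), key=score)
```

For example, on the deterministic order-1 sequence `abab...` the estimator selects order 1, since higher orders gain no likelihood but pay a larger penalty.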
Cites work
- scientific article; zbMATH DE number 45100
- scientific article; zbMATH DE number 3590180
- scientific article; zbMATH DE number 3444596
- scientific article; zbMATH DE number 918233
- A new covariance inequality and applications.
- An application of ergodic theory to probability theory
- Bandwidth selection in nonparametric kernel testing
- Chains with infinite connections: Uniqueness and Markov representation
- Context tree estimation for not necessarily finite memory processes, via BIC and MDL
- Elements of Information Theory
- Estimating the dimension of a model
- Exponential inequalities for empirical unbounded context trees
- Fluctuations of the Empirical Entropies of a Chain of Infinite Order
- Gaussian model selection
- How sampling reveals a process
- Ideal spatial adaptation by wavelet shrinkage
- Information theory. Coding theorems for discrete memoryless systems
- Large-scale typicality of Markov sample paths and consistency of MDL order estimators
- Markov approximation and consistent estimation of unbounded probabilistic suffix trees
- Markov approximations of chains of infinite order
- Measure concentration for a class of random processes
- New dependence coefficients. Examples and applications to statistics
- On Analytic Properties of Entropy Rate
- On Rate of Convergence of Statistical Estimation of Stationary Ergodic Processes
- On the minimal penalty for Markov order estimation
- Prediction of random sequences and universal coding
- Processes with long memory: Regenerative construction and perfect simulation
- Risk bounds for model selection via penalization
- Some upper bounds for the rate of convergence of penalized likelihood context tree estimators
- Speed of \(\overline d\)-convergence for Markov approximations of chains with complete connections. A coupling approach
- The consistency of the BIC Markov order estimator.
- The minimum description length principle in coding and modeling
- The optimal error exponent for Markov order estimation
- The performance of universal encoding
- Twice-universal coding
- Universal codes as a basis for time series testing
Cited in 1 document