Statistical learning based on Markovian data: maximal deviation inequalities and learning rates
DOI: 10.1007/s10472-019-09670-6 · zbMath: 1454.60112 · OpenAlex: W2970180800 · Wikidata: Q127317625 · Scholia: Q127317625 · MaRDI QID: Q2202513
Stéphan Clémençon, Patrice Bertail, Gabriela Ciołek
Publication date: 18 September 2020
Published in: Annals of Mathematics and Artificial Intelligence
Full work available at URL: https://doi.org/10.1007/s10472-019-09670-6
Keywords: empirical process; unsupervised learning; concentration inequality; generalization bound; novelty detection; stationary probability distribution; regenerative method; minimum volume set; Harris positive Markov chain
MSC classification:
- Markov processes: estimation; hidden Markov models (62M05)
- Discrete-time Markov processes on general state spaces (60J05)
- Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
Related Items (1)
Cites Work
- Bootstrap uniform central limit theorems for Harris recurrent Markov chains
- Model selection for weakly dependent time series forecasting
- Some limit theorems for empirical processes (with discussion)
- Sums of random variables with \(\phi\)-mixing
- Regenerative block-bootstrap for Markov chains
- Learning from dependent observations
- Uniform convergence of Vapnik-Chervonenkis classes under ergodic sampling
- Complexity-penalized estimation of minimum volume sets for dependent data
- Generalized quantile processes
- Minimum volume sets and generalized quantile processes
- Inequalities for absolutely regular sequences: application to density estimation
- Edgeworth expansions of suitably normalized sample mean statistics for atomic Markov chains
- Maximal inequalities for partial sums of \(\rho\)-mixing sequences
- Weak convergence and empirical processes. With applications to statistics
- Rosenthal-type inequalities for the maximum of partial sums of stationary processes and examples
- A renewal approach to Markovian \(U\)-statistics
- On the subspaces of \(L^p\) \((p > 2)\) spanned by sequences of independent random variables
- Exponential concentration inequalities for additive functionals of Markov chains
- Yet Another Look at Harris’ Ergodic Theorem for Markov Chains
- Generalization Bounds for Time Series Prediction with Non-stationary Processes
- The Generalization Ability of Online Algorithms for Dependent Data
- Sharp Bounds for the Tails of Functionals of Markov Chains
- A splitting technique for Harris recurrent Markov chains
- Subgeometric Rates of Convergence of f-Ergodic Markov Chains
- Rademacher penalties and structural risk minimization
- Applied Probability and Queues
- Contributions to Doeblin's theory of Markov processes
- Some applications of concentration inequalities to statistics