Prediction of time series by statistical learning: general losses and fast rates
DOI: 10.2478/demo-2013-0004 · OpenAlex: W1971110023 · MaRDI QID: Q5417591
Olivier Wintenberger, Xiaoyin Li, Pierre Alquier
Publication date: 21 May 2014
Published in: Dependence Modeling
Full work available at URL: https://arxiv.org/abs/1211.1847
Keywords: mixing; time series forecasting; statistical learning theory; weak dependence; oracle inequalities; fast rates; GDP forecasting; PAC-Bayesian bounds
Applications of statistics to economics (62P20); Inference from stochastic processes and prediction (62M20); Time series, auto-correlation, regression, etc. in statistics (GARCH) (62M10); Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05); Prediction theory (aspects of stochastic processes) (60G25)
Cites Work
- Model selection for weakly dependent time series forecasting
- Deviation inequalities for sums of weakly dependent time series
- Markov chains and stochastic stability
- PAC-Bayesian bounds for randomized empirical risk minimizers
- Concentration inequalities and model selection. École d'Été de Probabilités de Saint-Flour XXXIII, 2003
- Weakly dependent chains with infinite memory
- Learning by mirror averaging
- Learning from dependent observations
- Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity
- Time series: Theory and methods
- Mixing: Properties and examples
- The weighted majority algorithm
- Nonparametric time series prediction through adaptive model selection
- Concentration of measure inequalities for Markov chains and \(\Phi\)-mixing processes
- Statistical learning theory and stochastic optimization. École d'Été de Probabilités de Saint-Flour XXXI, 2001
- Sharp oracle inequalities for aggregation of affine estimators
- PAC-Bayesian bounds for sparse regression estimation with exponential weights
- The generalization performance of ERM algorithm with strongly mixing observations
- Fast learning rates in statistical inference through aggregation
- Learning rates of regularized regression for exponentially strongly mixing sequence
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Fast learning from \(\alpha\)-mixing observations
- Aggregation for Gaussian regression
- Weak dependence: with examples and applications
- Generalization and Robustness of Batched Weighted Average Algorithm with V-Geometrically Ergodic Markov Data
- The Generalization Ability of Online Algorithms for Dependent Data
- PAC-Bayesian Inequalities for Martingales
- Mixing properties of Harris chains and autoregressive processes
- Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation
- Asymptotic evaluation of certain Markov process expectations for large time, III
- Regression Quantiles
- Memory-universal prediction of stationary random processes
- Inégalités de Hoeffding pour les fonctions lipschitziennes de suites dépendantes [Hoeffding inequalities for Lipschitz functions of dependent sequences]
- Ergodic Mirror Descent
- GARCH Models
- Sequential Quantile Prediction of Time Series
- Sparsity regret bounds for individual sequences in online linear regression
- Prediction, Learning, and Games
- Some Limit Theorems for Stationary Processes
- Gaussian model selection