Generalization bounds for non-stationary mixing processes
From MaRDI portal
Publication: Q2360972
DOI: 10.1007/s10994-016-5588-2
zbMATH Open: 1412.68186
OpenAlex: W2528961511
MaRDI QID: Q2360972
FDO: Q2360972
Authors: Vitaly Kuznetsov, Mehryar Mohri
Publication date: 29 June 2017
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-016-5588-2
Recommendations
- Generalization bounds for time series prediction with non-stationary processes
- Stability bounds for stationary \(\varphi \)-mixing and \(\beta \)-mixing processes
- Nonparametric time series prediction through adaptive model selection
- Discrepancy-based theory and algorithms for forecasting non-stationary time series
- Learning theory estimates with observations from general stationary stochastic processes
Keywords: time series; mixing; Markov processes; fast rates; non-stationary processes; generalization bounds; local Rademacher complexity; asymptotic stationarity; unbounded loss
Cites Work
- Mixing: Properties and examples
- Stability bounds for stationary \(\varphi \)-mixing and \(\beta \)-mixing processes
- The Generalization Ability of Online Algorithms for Dependent Data
- Prediction of time series by statistical learning: general losses and fast rates
- Real Analysis and Probability
- Title not available
- Local Rademacher complexities
- Title not available
- Chromatic PAC-Bayes bounds for non-IID data: applications to ranking and stationary \(\beta \)-mixing processes
- Some Limit Theorems for Random Functions. I
- Title not available
- Learning without concentration
- Fluid queues with level dependent evolution
- Foundations of machine learning
- Rates of convergence for empirical processes of stationary mixing sequences
- Title not available
- A Glivenko-Cantelli theorem for exchangeable random variables
- Nonparametric time series prediction through adaptive model selection
- New analysis and algorithm for learning with drifting distributions
- Model selection for weakly dependent time series forecasting
- A new convex objective function for the supervised learning of single-layer neural networks
- Sequential complexities and uniform martingale laws of large numbers
- Weak convergence of partial sums of absolutely regular sequences
- On the complexity of learning from drifting distributions
- Relative deviation learning bounds and generalization with unbounded loss functions
- Generalization bounds for time series prediction with non-stationary processes
Cited In (17)
- Nonparametric time series prediction through adaptive model selection
- Exponential inequalities for nonstationary Markov chains
- Empirical risk minimization and complexity of dynamical models
- Finite time identification in unstable linear systems
- Hold-out estimates of prediction models for Markov processes
- Stability bounds for stationary \(\varphi \)-mixing and \(\beta \)-mixing processes
- On the sample complexity of the linear quadratic regulator
- Generalization bounds for time series prediction with non-stationary processes
- Title not available
- Detecting virtual concept drift of regressors without ground truth values
- Discrepancy-based theory and algorithms for forecasting non-stationary time series
- Adaptive deep learning for nonlinear time series models
- Chromatic PAC-Bayes bounds for non-IID data: applications to ranking and stationary \(\beta \)-mixing processes
- Nonparametric risk bounds for time-series forecasting
- A generalization bound of deep neural networks for dependent data
- Quantitative bounds for concentration-of-measure inequalities and empirical regression: the independent case
- Drift estimation for a multi-dimensional diffusion process using deep neural networks