On the complexity of learning from drifting distributions
From MaRDI portal
Publication: 1376422
DOI: 10.1006/inco.1997.2656
zbMath: 0887.68093
OpenAlex: W2092714606
MaRDI QID: Q1376422
Rakesh D. Barve, Philip M. Long
Publication date: 25 May 1998
Published in: Information and Computation
Full work available at URL: https://doi.org/10.1006/inco.1997.2656
Related Items
- Generalization bounds for non-stationary mixing processes
- Learning from Non-iid Data: Fast Rates for the One-vs-All Multiclass Plug-in Classifiers
- Discrepancy-based theory and algorithms for forecasting non-stationary time series
- Learning with a Drifting Target Concept
- Improved lower bounds for learning from noisy examples: An information-theoretic approach
- A no-free-lunch theorem for multitask learning
Cites Work
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Prediction, learning, uniform convergence, and scale-sensitive dimensions
- Tracking the best expert
- Tracking drifting concepts by minimizing disagreements
- The weighted majority algorithm
- Sharper bounds for Gaussian and empirical processes
- Predicting \(\{ 0,1\}\)-functions on randomly drawn points
- The hardness of approximate optima in lattices, codes, and systems of linear equations
- Learning changing concepts by exploiting the structure of change
- A general lower bound on the number of examples needed for learning
- On the density of families of sets
- Probably Approximate Learning of Sets and Functions
- Present Position and Potential Developments: Some Personal Views: Statistical Theory: The Prequential Approach
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convergence of stochastic processes