The generalization performance of ERM algorithm with strongly mixing observations
DOI: 10.1007/s10994-009-5104-z
zbMath: 1470.68214
OpenAlex: W2058402854
MaRDI QID: Q1959486
Authors: Bin Zou, Luoqing Li, Zong Ben Xu
Publication date: 7 October 2010
Published in: Machine Learning
Full work available at URL: https://doi.org/10.1007/s10994-009-5104-z
Related Items (22)
- Generalization performance of Lagrangian support vector machine based on Markov sampling
- Online regularized pairwise learning with non-i.i.d. observations
- Generalization and learning rate of multi-class support vector classification and regression
- Generalization bounds of ERM algorithm with Markov chain samples
- Regression learning with non-identically and non-independently sampling
- Robustness and generalization
- Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
- Fast learning from \(\alpha\)-mixing observations
- Convergence rate of the semi-supervised greedy algorithm
- Learning bounds of ERM principle for sequences of time-dependent samples
- Generalization performance of least-square regularized regression algorithm with Markov chain samples
- Learning performance of regularized regression with multiscale kernels based on Markov observations
- Generalization ability of online pairwise support vector machine
- Prediction of time series by statistical learning: general losses and fast rates
- Semi-supervised learning based on high density region estimation
- System identification using kernel-based regularization: new insights on stability and consistency issues
- Measuring the Capacity of Sets of Functions in the Analysis of ERM
- Learning from uniformly ergodic Markov chains
- Optimal rate for support vector machine regression with Markov chain samples
- Indefinite kernel network with dependent sampling
- Generalization performance of Gaussian kernels SVMC based on Markov sampling
- Extreme learning machine for ranking: generalization analysis and applications
Cites Work
- Probability inequalities for empirical processes and a law of the iterated logarithm
- Learning from dependent observations
- Bounds for the uniform deviation of empirical measures
- Prediction, learning, uniform convergence, and scale-sensitive dimensions
- Sharper bounds for Gaussian and empirical processes
- Rates of convergence for empirical processes of stationary mixing sequences
- A note on uniform laws of averages for dependent processes
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Rates of uniform convergence of empirical means with mixing processes
- New approaches to statistical learning theory
- An inequality for uniform deviations of sample averages from their means
- The performance bounds of learning machines based on exponentially strongly mixing sequences
- Concentration inequalities and asymptotic results for ratio type empirical processes
- On the mathematical foundations of learning
- A central limit theorem and a strong mixing condition
- Learning Theory
- On the Generalization Ability of On-Line Learning Algorithms
- Capacity of reproducing kernel spaces in learning theory
- Mixing Conditions for Markov Chains
- On the posterior-probability estimate of the error rate of nonparametric classification rules
- Minimum complexity regression estimation with weakly dependent observations
- Scale-sensitive dimensions, uniform convergence, and learnability
- Estimating the approximation error in learning theory
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- On the Tchebychef Inequality of Bernstein
- Shannon sampling and function reconstruction from point values
- DOI: 10.1162/153244303321897690
- Convergence of stochastic processes