The performance bounds of learning machines based on exponentially strongly mixing sequences
DOI: 10.1016/j.camwa.2006.07.015
zbMath: 1151.68600
OpenAlex: W2046464909
MaRDI QID: Q2458710
Publication date: 2 November 2007
Published in: Computers & Mathematics with Applications
Full work available at URL: https://doi.org/10.1016/j.camwa.2006.07.015
Keywords: uniform convergence; mixing sequence; learning machines; empirical risk minimizing; generalization performance
Mathematics Subject Classification: Stationary stochastic processes (60G10); Learning and adaptive systems in artificial intelligence (68T05); Performance evaluation, queueing, and scheduling in the context of computer systems (68M20)
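For orientation, a hedged sketch of the type of bound named in the title (illustrative constants, following the general pattern of the cited Modha-Masry framework rather than the paper's exact statement): if the stationary sample \(\{z_i\}_{i=1}^n\) is exponentially strongly mixing, i.e. \(\alpha(k) \le \bar{\alpha}\, e^{-c k^{\gamma}}\) with \(c, \gamma > 0\), then for bounded \(f\) with \(\|f\|_\infty \le M\) a Hoeffding-type deviation inequality holds with the sample size \(n\) replaced by an effective sample size \(n_e \asymp n^{\gamma/(\gamma+1)}\):
\[
\Pr\!\left\{ \left| \frac{1}{n}\sum_{i=1}^{n} f(z_i) - \mathbb{E}f \right| \ge \varepsilon \right\}
\;\le\; 2\exp\!\left( - \frac{C\, n_e\, \varepsilon^2}{M^2} \right),
\]
so the generalization rates of empirical risk minimization mirror the i.i.d. case at the reduced sample size \(n_e\).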
Related Items (14)
- The consistency of least-square regularized regression with negative association sequence
- Generalization bounds of ERM algorithm with Markov chain samples
- Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Learning from regularized regression algorithms with \(p\)-order Markov chain sampling
- Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
- Fast learning from \(\alpha\)-mixing observations
- The generalization performance of ERM algorithm with strongly mixing observations
- Generalization performance of least-square regularized regression algorithm with Markov chain samples
- Generalization ability of online pairwise support vector machine
- Risk minimization for time series binary choice with variable selection
- Semi-supervised learning based on high density region estimation
- Measuring the capacity of sets of functions in the analysis of ERM
- Learning from uniformly ergodic Markov chains
Cites Work
- Uniform and universal Glivenko-Cantelli classes
- Rates of convergence for empirical processes of stationary mixing sequences
- A note on uniform laws of averages for dependent processes
- Learning and generalisation. With applications to neural networks.
- Rates of uniform convergence of empirical means with mixing processes
- New approaches to statistical learning theory
- On the mathematical foundations of learning
- Mixing Conditions for Markov Chains
- Minimum complexity regression estimation with weakly dependent observations
- Scale-sensitive dimensions, uniform convergence, and learnability
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Probability Inequalities for Sums of Bounded Random Variables
- Advances in Neural Networks – ISNN 2005
- Some applications of concentration inequalities to statistics