Scale-sensitive dimensions, uniform convergence, and learnability
DOI: 10.1145/263867.263927
zbMath: 0891.68086
OpenAlex: W2017753243
Wikidata: Q59538624
Scholia: Q59538624
MaRDI QID: Q4377590
Noga Alon, Shai Ben-David, Nicolò Cesa-Bianchi, David Haussler
Publication date: 17 February 1998
Published in: Journal of the ACM
Full work available at URL: http://www.acm.org/pubs/contents/journals/jacm/1997-44/
Related Items (67)
Asymptotic normality with small relative errors of posterior probabilities of half-spaces
IAN: Iterated Adaptive Neighborhoods for Manifold Learning and Dimensionality Estimation
The shattering dimension of sets of linear functionals.
Time series prediction using support vector machines, the orthogonal and the regularized orthogonal least-squares algorithms
Efficient algorithms for learning functions with bounded variation
Tight upper bounds for the discrepancy of half-spaces
Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
Application of integral operator for regularized least-square regression
Approximation and learning of convex superpositions
\(L_{p}\)-norm Sauer-Shelah lemma for margin multi-category classifiers
On the value of partial information for learning from examples
Simulation-based optimization of Markov decision processes: an empirical process theory approach
Tighter guarantees for the compressive multi-layer perceptron
The learnability of quantum states
Statistical performance of support vector machines
Optimal granularity selection based on algorithm stability with application to attribute reduction in rough set theory
VC bounds on the cardinality of nearly orthogonal function classes
Robustness and generalization
Uncertainty learning of rough set-based prediction under a holistic framework
Robust regression using biased objectives
High-probability minimax probability machines
Inapproximability of Truthful Mechanisms via Generalizations of the Vapnik–Chervonenkis Dimension
The universal Glivenko-Cantelli property
Learning with stochastic inputs and adversarial outputs
The generalization performance of ERM algorithm with strongly mixing observations
A note on different covering numbers in learning theory.
Vapnik-Chervonenkis type conditions and uniform Donsker classes of functions
Kernel methods in machine learning
Obtaining fast error rates in nonconvex situations
Generalization performance of least-square regularized regression algorithm with Markov chain samples
The performance bounds of learning machines based on exponentially strongly mixing sequences
A tight upper bound on the generalization error of feedforward neural networks
Neural networks with quadratic VC dimension
Sample Complexity of Classifiers Taking Values in ℝ^Q, Application to Multi-Class SVMs
Boosting conditional probability estimators
Optimal convergence rate of the universal estimation error
Aspects of discrete mathematics and probability in the theory of machine learning
Sequential complexities and uniform martingale laws of large numbers
Small size quantum automata recognizing some regular languages
Approximation of frame based missing data recovery
A note on a scale-sensitive dimension of linear bounded functionals in Banach spaces
Analysis of a multi-category classifier
Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
Application of integral operator for vector-valued regression learning
Rademacher Chaos Complexities for Learning the Kernel Problem
Regularization and statistical learning theory for data analysis.
Comments on: Support vector machines maximizing geometric margins for multi-class classification
Multi-category classifiers and sample width
VC Dimension, Fat-Shattering Dimension, Rademacher Averages, and Their Applications
Classes of Functions Related to VC Properties
On Martingale Extensions of Vapnik–Chervonenkis Theory with Applications to Online Learning
Measuring the Capacity of Sets of Functions in the Analysis of ERM
Theory of Classification: a Survey of Some Recent Advances
A tutorial on ν-support vector machines
Entropy conditions for \(L_{r}\)-convergence of empirical processes
A graph-theoretic generalization of the Sauer-Shelah lemma
Scale-sensitive dimensions and skeleton estimates for classification
On the VC-dimension and boolean functions with long runs
Prediction, learning, uniform convergence, and scale-sensitive dimensions
Subsampling in Smoothed Range Spaces
Efficient learning with robust gradient descent
Maximal width learning of binary functions
Distribution-free consistency of empirical risk minimization and support vector regression
Integer cells in convex sets
Learnability in Hilbert spaces with reproducing kernels
Primal and dual combinatorial dimensions
Rates of uniform convergence of empirical means with mixing processes