Scale-sensitive dimensions, uniform convergence, and learnability


Publication: 4377590

DOI: 10.1145/263867.263927
zbMath: 0891.68086
OpenAlex: W2017753243
Wikidata: Q59538624
Scholia: Q59538624
MaRDI QID: Q4377590

Noga Alon, Shai Ben-David, Nicolò Cesa-Bianchi, David Haussler

Publication date: 17 February 1998

Published in: Journal of the ACM

Full work available at URL: http://www.acm.org/pubs/contents/journals/jacm/1997-44/




Related Items (69)

Asymptotic normality with small relative errors of posterior probabilities of half-spaces
IAN: Iterated Adaptive Neighborhoods for Manifold Learning and Dimensionality Estimation
The shattering dimension of sets of linear functionals.
Time series prediction using support vector machines, the orthogonal and the regularized orthogonal least-squares algorithms
Efficient algorithms for learning functions with bounded variation
Tight upper bounds for the discrepancy of half-spaces
Estimates of covering numbers of convex sets with slowly decaying orthogonal subsets
Application of integral operator for regularized least-square regression
Approximation and learning of convex superpositions
\(L_{p}\)-norm Sauer-Shelah lemma for margin multi-category classifiers
On the value of partial information for learning from examples
Simulation-based optimization of Markov decision processes: an empirical process theory approach
Tighter guarantees for the compressive multi-layer perceptron
The learnability of quantum states
Statistical performance of support vector machines
Optimal granularity selection based on algorithm stability with application to attribute reduction in rough set theory
VC bounds on the cardinality of nearly orthogonal function classes
Robustness and generalization
Uncertainty learning of rough set-based prediction under a holistic framework
Robust regression using biased objectives
High-probability minimax probability machines
Inapproximability of Truthful Mechanisms via Generalizations of the Vapnik-Chervonenkis Dimension
The universal Glivenko-Cantelli property
Learning with stochastic inputs and adversarial outputs
The generalization performance of ERM algorithm with strongly mixing observations
A note on different covering numbers in learning theory.
Vapnik-Chervonenkis type conditions and uniform Donsker classes of functions
Kernel methods in machine learning
Obtaining fast error rates in nonconvex situations
Generalization performance of least-square regularized regression algorithm with Markov chain samples
The performance bounds of learning machines based on exponentially strongly mixing sequences
A tight upper bound on the generalization error of feedforward neural networks
Neural networks with quadratic VC dimension
Sample Complexity of Classifiers Taking Values in ℝ^Q, Application to Multi-Class SVMs
Boosting conditional probability estimators
Optimal convergence rate of the universal estimation error
Aspects of discrete mathematics and probability in the theory of machine learning
Sequential complexities and uniform martingale laws of large numbers
Small size quantum automata recognizing some regular languages
Approximation of frame based missing data recovery
A note on a scale-sensitive dimension of linear bounded functionals in Banach spaces
Analysis of a multi-category classifier
Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
Application of integral operator for vector-valued regression learning
Rademacher Chaos Complexities for Learning the Kernel Problem
Regularization and statistical learning theory for data analysis.
Comments on: Support vector machines maximizing geometric margins for multi-class classification
Multi-category classifiers and sample width
VC Dimension, Fat-Shattering Dimension, Rademacher Averages, and Their Applications
Classes of Functions Related to VC Properties
On Martingale Extensions of Vapnik–Chervonenkis Theory with Applications to Online Learning
Measuring the Capacity of Sets of Functions in the Analysis of ERM
Theory of Classification: a Survey of Some Recent Advances
A tutorial on ν-support vector machines
Entropy conditions for \(L_{r}\)-convergence of empirical processes
A graph-theoretic generalization of the Sauer-Shelah lemma
Scale-sensitive dimensions and skeleton estimates for classification
On the VC-dimension and boolean functions with long runs
Prediction, learning, uniform convergence, and scale-sensitive dimensions
Subsampling in Smoothed Range Spaces
Efficient learning with robust gradient descent
Maximal width learning of binary functions
Unnamed Item
Distribution-free consistency of empirical risk minimization and support vector regression
Integer cells in convex sets
Unnamed Item
Learnability in Hilbert spaces with reproducing kernels
Primal and dual combinatorial dimensions
Rates of uniform convergence of empirical means with mixing processes






This page was built for publication: Scale-sensitive dimensions, uniform convergence, and learnability