Efficient distribution-free learning of probabilistic concepts
From MaRDI portal
Publication: 1329154
DOI: 10.1016/S0022-0000(05)80062-5
zbMath: 0822.68093
MaRDI QID: Q1329154
Michael Kearns, Robert E. Schapire
Publication date: 9 October 1995
Published in: Journal of Computer and System Sciences
MSC classification: 68T05 (Learning and adaptive systems in artificial intelligence)
Related Items
- A note on a scale-sensitive dimension of linear bounded functionals in Banach spaces
- Regularization and statistical learning theory for data analysis
- Computational sample complexity and attribute-efficient learning
- Knows what it knows: a framework for self-aware learning
- Robustness and generalization
- An algorithmic theory of learning: robust concepts and random projection
- Partial observability and learnability
- Maximal width learning of binary functions
- A graph-theoretic generalization of the Sauer-Shelah lemma
- Scale-sensitive dimensions and skeleton estimates for classification
- Prediction, learning, uniform convergence, and scale-sensitive dimensions
- Learning with restricted focus of attention
- On the boosting ability of top-down decision tree learning algorithms
- On-line maximum likelihood prediction with respect to general loss functions
- Approximation and learning of convex superpositions
- Learning from examples with unspecified attribute values
- Approximate location of relevant variables under the crossover distribution
- Structural results about exact learning with unspecified attribute values
- Learnability in Hilbert spaces with reproducing kernels
- Learning fixed-dimension linear thresholds from fragmented data
- PAC learning of probability distributions over a discrete domain
- Efficient algorithms for learning functions with bounded variation
- Uniform approximation of Vapnik-Chervonenkis classes
- Sequential complexities and uniform martingale laws of large numbers
- Aspects of discrete mathematics and probability in the theory of machine learning
- Links between probabilistic automata and hidden Markov models: probability distributions, learning models and induction algorithms
- Integer cells in convex sets
- Sample Complexity of Classifiers Taking Values in ℝ^Q, Application to Multi-Class SVMs
Cites Work
- Occam's razor
- Fast probabilistic algorithms for Hamiltonian circuits and matchings
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Equivalence of models for polynomial learnability
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- A learning criterion for stochastic rules
- Central limit theorems for empirical measures
- Toward efficient agnostic learning
- Learnability and the Vapnik-Chervonenkis dimension
- A theory of the learnable
- Computational limitations on learning from examples
- Probability Inequalities for Sums of Bounded Random Variables
- Fuzzy sets
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convergence of stochastic processes