Decision theoretic generalizations of the PAC model for neural net and other learning applications (Q1198550)

From MaRDI portal


Language: English
Label: Decision theoretic generalizations of the PAC model for neural net and other learning applications
Description: scientific article

    Statements

    Decision theoretic generalizations of the PAC model for neural net and other learning applications (English)
    16 January 1993
    The paper introduces an extension of the probably approximately correct (PAC) model of learning from examples. Whereas the functions to be learned on an instance space are usually \(\{0,1\}\)-valued, here they may take values in arbitrary sets. The extended model is based on statistical decision theory. The results rest on a proposed generalization of the Vapnik-Chervonenkis dimension that applies to classes of real-valued functions. Within the extended model it is possible to formulate distribution-independent upper bounds on the sample size sufficient for learning in feedforward neural networks.
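    The generalized Vapnik-Chervonenkis dimension referred to above is, in essence, the pseudo-dimension in the sense of Pollard. The following formulation is a sketch for orientation only; the symbols \(F\), \(X\), \(x_i\), \(r_i\) are illustrative notation and are not quoted from the paper. For a class \(F\) of real-valued functions on an instance space \(X\):
    \[
    \begin{aligned}
    &\{x_1,\dots,x_d\}\subseteq X \text{ is pseudo-shattered by } F \;\Longleftrightarrow\; \exists\, r_1,\dots,r_d\in\mathbb{R}\ \ \forall (s_1,\dots,s_d)\in\{-1,+1\}^d\ \ \exists f\in F:\\
    &\qquad\operatorname{sign}\bigl(f(x_i)-r_i\bigr)=s_i \quad (1\le i\le d),\\
    &\dim_P(F)=\sup\bigl\{\,d : \text{some } d\text{-point subset of } X \text{ is pseudo-shattered by } F\,\bigr\}.
    \end{aligned}
    \]
    For \(\{0,1\}\)-valued classes this quantity coincides with the ordinary Vapnik-Chervonenkis dimension, which is the sense in which it generalizes the classical notion to real-valued function classes.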
    PAC learning
    Vapnik-Chervonenkis dimension
    neural networks

    Identifiers