Concentration estimates for learning with unbounded sampling (Q1946480)

From MaRDI portal

Language: English
Label: Concentration estimates for learning with unbounded sampling
Description: scientific article

    Statements

    Concentration estimates for learning with unbounded sampling (English)
    Publication date: 15 April 2013
    The paper studies kernel-based least squares regression with unbounded sampling processes. Under a moment hypothesis on the unbounded sampling, an approximation assumption on the regression function, and a capacity condition on the hypothesis space, sharper learning rates are derived. The error analysis introduces a probability inequality for unbounded random variables and employs an iteration technique.
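
    For orientation, a minimal sketch of the regularized least-squares scheme in a reproducing kernel Hilbert space that the abstract and keywords refer to; the notation (sample z = {(x_i, y_i)}_{i=1}^m, kernel K, hypothesis space H_K, regularization parameter λ) is standard in this literature and assumed here, not quoted from the entry:

    \[
    f_{\mathbf{z},\lambda} \;=\; \arg\min_{f \in \mathcal{H}_K} \Bigl\{ \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2 \;+\; \lambda \, \| f \|_K^2 \Bigr\}
    \]

    In the unbounded-sampling setting, the outputs y_i need not lie in a fixed bounded interval and are controlled only through the moment hypothesis, which is why a probability inequality for unbounded random variables is needed in the error analysis.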
    Keywords: learning theory; least-square regression; regularization in reproducing kernel Hilbert spaces; empirical covering number; concentration estimates
