Relative deviation learning bounds and generalization with unbounded loss functions
From MaRDI portal
Abstract: We present an extensive analysis of relative deviation bounds, including detailed proofs of two-sided inequalities and their implications. We also give detailed proofs of two-sided generalization bounds that hold in the general case of unbounded loss functions, under the assumption that a moment of the loss is bounded. These bounds are useful in the analysis of importance weighting and other learning tasks such as unbounded regression.
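The abstract motivates these bounds via importance weighting, where the reweighted loss is typically unbounded even when the base loss is bounded. A minimal sketch of that setting, assuming Gaussian source/target densities and a squared loss of a fixed predictor (all names, distributions, and the `importance_weighted_risk` helper are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Source samples drawn from p = N(0, 1); target distribution q = N(0.5, 1).
# Importance weights w(x) = q(x)/p(x) reweight the source loss so that
# E_p[w(x) L(h, x)] = E_q[L(h, x)].  The weighted loss w(x) L(h, x) is
# unbounded in x, which is why generalization bounds under a moment
# assumption (rather than a uniform bound on the loss) are needed here.

def importance_weighted_risk(losses, weights):
    """Plain importance-weighted empirical risk estimate."""
    return np.mean(weights * losses)

x = rng.normal(0.0, 1.0, size=10_000)            # sample from the source p
weights = np.exp(-0.5 * ((x - 0.5) ** 2 - x ** 2))  # q(x)/p(x) in closed form
losses = x ** 2                                   # squared loss of predicting 0
est = importance_weighted_risk(losses, weights)
# Analytically, E_q[x^2] = Var + mean^2 = 1 + 0.25 = 1.25.
print(est)
```

The estimator is unbiased for the target risk, but its fluctuations are governed by moments of the weighted loss, which is the regime the paper's relative deviation bounds address.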
Recommendations
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- scientific article; zbMATH DE number 5926544
- Stability results in learning theory
- Concentration estimates for learning with unbounded sampling
- On uniform deviations of general empirical risks with unboundedness, dependence, and high dimensionality
Cites work
- scientific article; zbMATH DE number 2089353
- scientific article; zbMATH DE number 5957262
- scientific article; zbMATH DE number 1804106
- scientific article; zbMATH DE number 3883309
- scientific article; zbMATH DE number 4170917
- scientific article; zbMATH DE number 5547839
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 1552503
- 10.1162/1532443041424300
- A result of Vapnik with applications
- A theory of learning from different domains
- Asymptotics via empirical processes. With comments and a rejoinder by the author
- Concentration inequalities, large and moderate deviations for self-normalized empirical processes
- Convergence of stochastic processes
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Direct importance estimation for covariate shift adaptation
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Estimation of Dependences Based on Empirical Data
- Fast rates for support vector machines using Gaussian kernels
- Improving predictive inference under covariate shift by weighting the log-likelihood function
- Learning Theory and Kernel Machines
- Learning without concentration
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Model selection and error estimation
- Neural Network Learning
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- On the density of families of sets
- Probability Inequalities for Sums of Bounded Random Variables
- Risk bounds for statistical learning
- Sample Selection Bias Correction Theory
- Sharper bounds for Gaussian and empirical processes
- Symmetrization approach to concentration inequalities for empirical processes.
- The asymptotic distribution of the supremum of the standardized empirical distribution function on subintervals
- Theory of Classification: a Survey of Some Recent Advances
- Tight lower bound on the probability of a binomial exceeding its expectation
- Universal Donsker classes and metric entropy
- Weighted sums of certain dependent random variables
Cited in (12)
- Generalization bounds for non-stationary mixing processes
- scientific article; zbMATH DE number 5926544
- Relative utility bounds for empirically optimal portfolios
- Relative loss bounds for temporal-difference learning
- scientific article; zbMATH DE number 1390070
- Improving reinforcement learning algorithms: Towards optimal learning rate policies
- Learning Theory
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- On uniform deviations of general empirical risks with unboundedness, dependence, and high dimensionality
- Best-effort adaptation
- scientific article; zbMATH DE number 6118058
- Multiple-source adaptation theory and algorithms
MaRDI item Q1714946