Relative deviation learning bounds and generalization with unbounded loss functions
Publication: 1714946
DOI: 10.1007/s10472-018-9613-y
zbMath: 1442.68194
arXiv: 1310.5796
OpenAlex: W2963449842
MaRDI QID: Q1714946
Corinna Cortes, Spencer Greenberg, Mehryar Mohri
Publication date: 1 February 2019
Published in: Annals of Mathematics and Artificial Intelligence
Full work available at URL: https://arxiv.org/abs/1310.5796
Keywords: learning theory; machine learning; generalization bounds; importance weighting; unbounded loss functions; relative deviation bounds; unbounded regression
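A relative deviation bound normalizes the gap between true and empirical error by the true error itself, yielding guarantees that are sharper for low-error hypotheses. As context for the keywords above, a minimal sketch of the classical bounded-loss form (usually attributed to Vapnik; cf. "A result of Vapnik with applications" in the citation list below), where R(h) is the true error, \widehat{R}_S(h) the empirical error on a sample S of size m, and \Pi_H the growth function of the hypothesis class H:

\Pr\left[\sup_{h \in H} \frac{R(h) - \widehat{R}_S(h)}{\sqrt{R(h)}} > \epsilon\right] \le 4\,\Pi_H(2m)\,e^{-m\epsilon^2/4}

Per the title and keywords, the paper extends bounds of this type to unbounded loss functions (under an assumption such as a bounded moment of the loss), the setting that arises with importance-weighted losses in sample-selection bias and domain adaptation.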
Related Items
Generalization bounds for non-stationary mixing processes, Improving reinforcement learning algorithms: Towards optimal learning rate policies, Multiple-source adaptation theory and algorithms, Relative utility bounds for empirically optimal portfolios
Cites Work
- Direct importance estimation for covariate shift adaptation
- Risk bounds for statistical learning
- Fast rates for support vector machines using Gaussian kernels
- Universal Donsker classes and metric entropy
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- The asymptotic distribution of the supremum of the standardized empirical distribution function on subintervals
- A result of Vapnik with applications
- Sharper bounds for Gaussian and empirical processes
- Symmetrization approach to concentration inequalities for empirical processes.
- Improving predictive inference under covariate shift by weighting the log-likelihood function
- Asymptotics via empirical processes. With comments and a rejoinder by the author
- Empirical margin distributions and bounding the generalization error of combined classifiers
- Concentration inequalities, large and moderate deviations for self-normalized empirical processes
- A theory of learning from different domains
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Tight lower bound on the probability of a binomial exceeding its expectation
- Weighted sums of certain dependent random variables
- On the density of families of sets
- Learning without Concentration
- Theory of Classification: a Survey of Some Recent Advances
- Sample Selection Bias Correction Theory
- DOI: 10.1162/1532443041424300
- Neural Network Learning
- Learning Theory and Kernel Machines
- Probability Inequalities for Sums of Bounded Random Variables
- Estimation of Dependences Based on Empirical Data
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Convergence of stochastic processes
- Model selection and error estimation