Fast rates for general unbounded loss functions: from ERM to generalized Bayes
Publication:4969103
Recommendations
- Posterior concentration and fast convergence rates for generalized Bayesian learning
- Relative deviation learning bounds and generalization with unbounded loss functions
- Fast rates in statistical and online learning
- Risk bounds for statistical learning
- Fast learning rates in statistical inference through aggregation
Cites work
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 42272 (no title available)
- scientific article; zbMATH DE number 45100 (no title available)
- scientific article; zbMATH DE number 47310 (no title available)
- scientific article; zbMATH DE number 823069 (no title available)
- $f$-Divergence Inequalities
- doi:10.1162/1532443041424300 (no title available)
- A General Framework for Updating Belief Distributions
- A decision-theoretic extension of stochastic complexity and its applications to learning
- An asymptotic property of model selection criteria
- Bayesian fractional posteriors
- Challenging the empirical mean and empirical variance: a deviation study
- Convergence rates of posterior distributions for non-i.i.d. observations
- Convergence rates of posterior distributions
- Convex Analysis
- Efficient agnostic learning of neural networks with bounded fan-in
- Empirical Bayes posterior concentration in sparse high-dimensional linear models
- Empirical minimization
- Fast learning rates for plug-in classifiers
- Fast learning rates in statistical inference through aggregation
- Fast rates in statistical and online learning
- Follow the leader if you can, hedge if you must
- From $\varepsilon$-entropy to KL-entropy: analysis of minimum information complexity density estimation
- Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
- Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it
- Information Consistency of Nonparametric Gaussian Process Methods
- Information-theoretic determination of minimax rates of convergence
- Learning by mirror averaging
- Learning without concentration
- Local Rademacher complexities
- Local Rademacher complexities and oracle inequalities in risk minimization (2004 IMS Medallion Lecture, with discussion and rejoinder)
- Loss minimization and parameter estimation with heavy tails
- Minimum complexity density estimation
- Minimum contrast estimators on sieves: Exponential bounds and rates of convergence
- Misspecification in infinite-dimensional Bayesian statistics
- Model selection for Gaussian regression with random design
- Mutual information, metric entropy and cumulative relative entropy risk
- Nonparametric Bayesian model selection and averaging
- On Bayesian consistency
- On Information and Sufficiency
- On aggregation for heavy-tailed classes
- Optimal aggregation of classifiers in statistical learning
- Optimal learning with $Q$-aggregation
- PAC-Bayesian stochastic model selection
- Prediction, Learning, and Games
- Probability inequalities for likelihood ratios and convergence rates of sieve MLEs
- Regularization, sparse recovery, and median-of-means tournaments
- Relative deviation learning bounds and generalization with unbounded loss functions
- Rigorous learning curve bounds from statistical mechanics
- Robust Bayesian inference via coarsening
- Rényi Divergence and Kullback-Leibler Divergence
- The consistency of posterior distributions in nonparametric problems
- The safe Bayesian: learning the learning rate via the mixability gap
- The semiparametric Bernstein-von Mises theorem
Cited in (17)
- User-friendly Introduction to PAC-Bayes Bounds
- Generalized Bayes approach to inverse problems with model misspecification
- Anytime-Valid Tests of Conditional Independence Under Model-X
- Relative deviation learning bounds and generalization with unbounded loss functions
- ERM learning with unbounded sampling
- Empirical Bayes inference in sparse high-dimensional generalized linear models
- Posterior consistency for the spectral density of non-Gaussian stationary time series
- Minimax rates for conditional density estimation via empirical entropy
- Relaxing the i.i.d. assumption: adaptively minimax optimal regret via root-entropic regularization
- The no-free-lunch theorems of supervised learning
- Fast rates in statistical and online learning
- Bayesian functional registration of fMRI activation maps
- A comparison of learning rate selection methods in generalized Bayesian inference
- Posterior concentration and fast convergence rates for generalized Bayesian learning
- Gibbs posterior concentration rates under sub-exponential type losses
- Gibbs posterior convergence and the thermodynamic formalism
- Bernstein-von Mises theorem and misspecified models: a review