Empirical entropy, minimax regret and minimax risk
Abstract: We consider the random design regression model with square loss. We propose a method that aggregates empirical risk minimizers (ERM) over appropriately chosen random subsets and reduces to ERM in the extreme case, and we establish sharp oracle inequalities for its risk. We show that, under the ε^{-p} growth of the empirical ε-entropy, the excess risk of the proposed method attains the rate n^{-2/(2+p)} for p ∈ (0, 2) and n^{-1/p} for p > 2, where n is the sample size. Furthermore, for p ∈ (0, 2), the excess risk rate matches the behavior of the minimax risk of function estimation in regression problems under the well-specified model. This yields the conclusion that the rates of statistical estimation in well-specified models (minimax risk) and in misspecified models (minimax regret) are equivalent in the regime p ∈ (0, 2). In other words, for p ∈ (0, 2) the problem of statistical learning enjoys the same minimax rate as the problem of statistical estimation. On the contrary, for p > 2 we show that the rates of the minimax regret are, in general, slower than those of the minimax risk. Our oracle inequalities also imply the d/n rates for Vapnik-Chervonenkis type classes of dimension d without the usual convexity assumption on the class; we show that these rates are optimal. Finally, for a slightly modified method, we derive a bound on the excess risk of s-sparse convex aggregation improving that of Lounici [Math. Methods Statist. 16 (2007) 246-259] and providing the optimal rate.
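The rate regimes described in the abstract can be summarized as follows (a sketch in assumed notation, not necessarily the paper's own: H(ε, F) denotes the empirical ε-entropy of the class F, n the sample size, and δ_n the excess-risk rate):

```latex
% Assumed notation: H(\varepsilon,\mathcal{F}) is the empirical
% \varepsilon-entropy of the class \mathcal{F}, n the sample size.
% Entropy growth condition:
%   H(\varepsilon,\mathcal{F}) \lesssim \varepsilon^{-p}, \qquad p > 0.
% Resulting excess-risk (minimax regret) rate, with a phase
% transition at p = 2:
\delta_n \asymp
\begin{cases}
  n^{-2/(2+p)}, & 0 < p < 2 \quad \text{(matches the minimax risk)},\\[2pt]
  n^{-1/p},     & p > 2 \quad \text{(slower than the minimax risk)}.
\end{cases}
```

In the low-complexity regime p < 2, misspecification costs nothing in rate; for p > 2 the regret rate is, in general, strictly slower than the well-specified estimation rate.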
Recommendations
- Empirical risk minimization is optimal for the convex aggregation problem
- Fast learning rates in statistical inference through aggregation
- Sparsity in penalized empirical risk minimization
- Learning Theory and Kernel Machines
- On the optimality of the empirical risk minimization procedure for the convex aggregation problem
Cited in (20)
- Bayesian fractional posteriors
- Bounding the expectation of the supremum of empirical processes indexed by Hölder classes
- Active learning for cost-sensitive classification
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Orthogonal statistical learning
- scientific article; zbMATH DE number 7370563
- Bypassing the Monster: A Faster and Simpler Optimal Algorithm for Contextual Bandits Under Realizability
- On least squares estimation under heteroscedastic and heavy-tailed errors
- Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
- Isotonic regression with unknown permutations: statistics, computation and adaptation
- Distribution-free robust linear regression
- Empirical variance minimization with applications in variance reduction and optimal control
- Isotonic regression in general dimensions
- scientific article; zbMATH DE number 63418
- Minimax rates for conditional density estimation via empirical entropy
- Relaxing the i.i.d. assumption: adaptively minimax optimal regret via root-entropic regularization
- Set structured global empirical risk minimizers are rate optimal in general dimensions
- Optimal functional supervised classification with separation condition
- Deep learning: a statistical viewpoint
- Entropy Maximization for Partially Observable Markov Decision Processes