Local Rademacher complexities
Publication: 2583411
Abstract: We propose new bounds on the error of learning algorithms in terms of a data-dependent notion of complexity. The estimates we establish give optimal rates and are based on a local and empirical version of Rademacher averages, in the sense that the Rademacher averages are computed from the data, on a subset of functions with small empirical error. We present some applications to classification and prediction with convex function classes, and with kernel classes in particular.
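The central object in the abstract is a localized empirical Rademacher average: a Rademacher average computed from the data over only those functions whose empirical error (or empirical second moment) is small. As a rough illustration of that idea, not the paper's exact construction, the sketch below estimates by Monte Carlo the quantity E_sigma sup over {f in F : (1/n) sum_i f(x_i)^2 <= r} of (1/n) sum_i sigma_i f(x_i) for a finite class F represented by its values on the sample. The function name `local_empirical_rademacher`, the finite-class representation, and the choice of localizing by the empirical second moment are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def local_empirical_rademacher(F_values, radius, n_mc=1000, rng=None):
    """Monte Carlo sketch of a localized empirical Rademacher average.

    F_values: array of shape (m, n); row j holds the values f_j(x_1), ..., f_j(x_n)
              of one function f_j from a finite class F on the sample.
    radius:   localization level r; only functions with empirical second
              moment (1/n) * sum_i f(x_i)^2 <= r enter the supremum.
    n_mc:     number of random sign vectors used in the average.
    """
    rng = np.random.default_rng(rng)
    m, n = F_values.shape
    # Restrict to the "local" subset of the class (assumed localization rule).
    local = F_values[(F_values ** 2).mean(axis=1) <= radius]
    if local.size == 0:
        return 0.0
    # Average the supremum of (1/n) * sum_i sigma_i f(x_i) over random signs.
    total = 0.0
    for _ in range(n_mc):
        sigma = rng.choice([-1.0, 1.0], size=n)
        total += np.max(local @ sigma) / n
    return total / n_mc
```

For instance, `F_values` could hold the predictions of m candidate predictors on the n training points; in the paper the localization radius is chosen via the fixed point of a sub-root function rather than supplied by hand as here.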
Cites work
- scientific article; zbMATH DE number 2089352
- scientific article; zbMATH DE number 2089354
- scientific article; zbMATH DE number 5654889
- scientific article; zbMATH DE number 49190
- scientific article; zbMATH DE number 1246230
- scientific article; zbMATH DE number 2034517
- scientific article; zbMATH DE number 1552503
- scientific article; zbMATH DE number 893887
- DOI: 10.1162/153244303321897690
- A Bennett concentration inequality and its application to suprema of empirical processes
- A distribution-free theory of nonparametric regression
- A new approach to least-squares estimation, with applications
- A sharp concentration inequality with applications
- About the constants in Talagrand's concentration inequalities for empirical processes
- Advanced lectures on machine learning. Machine learning summer school 2002, Canberra, Australia, February 11--22, 2002. Revised lectures
- Asymptotic Statistics
- Complexity regularization via localized random penalties
- Concentration inequalities using the entropy method
- Convergence of stochastic processes
- Convexity, Classification, and Risk Bounds
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- Empirical minimization
- Improving the sample complexity using global data
- Model selection and error estimation
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Rademacher penalties and structural risk minimization
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- Some applications of concentration inequalities to statistics
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- The importance of convexity in learning with squared loss
- Une inégalité de Bennett pour les maxima de processus empiriques (A Bennett-type inequality for maxima of empirical processes)
- Uniform Central Limit Theorems
- Weak convergence and empirical processes. With applications to statistics
Cited in
(first 100 items shown)
- scientific article; zbMATH DE number 1552503
- Obtaining fast error rates in nonconvex situations
- A Statistical Learning Approach to Modal Regression
- Surrogate losses in passive and active learning
- Asset pricing with neural networks: significance tests
- Minimax nonparametric multi-sample test under smoothing
- Complexity of pattern classes and the Lipschitz property
- Robust statistical learning with Lipschitz and convex loss functions
- Convergence of online pairwise regression learning with quadratic loss
- Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces
- Nonparametric distributed learning under general designs
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- Learning with convex loss and indefinite kernels
- "Local" vs. "global" parameters -- breaking the Gaussian complexity barrier
- Mean estimation and regression under heavy-tailed distributions: A survey
- Nearest neighbor empirical processes
- Transductive Rademacher complexity and its applications
- Rademacher complexity in Neyman-Pearson classification
- Selective Rademacher penalization and reduced error pruning of decision trees
- Fast rates of minimum error entropy with heavy-tailed noise
- Estimation of partially conditional average treatment effect by double kernel-covariate balancing
- U-Processes and Preference Learning
- Monte Carlo algorithms for optimal stopping and statistical learning
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Statistical properties of kernel principal component analysis
- Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds
- scientific article; zbMATH DE number 7255125
- Concentration estimates for learning with unbounded sampling
- Rademacher Chaos Complexities for Learning the Kernel Problem
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Oracle inequalities for support vector machines that are based on random entropy numbers
- scientific article; zbMATH DE number 7306868
- Approximation error bounds via Rademacher's complexity
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Optimal dyadic decision trees
- Inverse statistical learning
- scientific article; zbMATH DE number 7625155
- Improving the sample complexity using global data
- Minimax fast rates for discriminant analysis with errors in variables
- Solving PDEs on spheres with physics-informed convolutional neural networks
- Regularized learning schemes in feature Banach spaces
- A moment-matching approach to testable learning and a new characterization of Rademacher complexity
- Learning without concentration
- Approximation by neural networks and learning theory
- Deep learning: a statistical viewpoint
- Comment
- On mean estimation for heteroscedastic random variables
- Refined Rademacher chaos complexity bounds with applications to the multikernel learning problem
- Fast rates for estimation error and oracle inequalities for model selection
- Distributed learning for sketched kernel regression
- Smooth Contextual Bandits: Bridging the Parametric and Nondifferentiable Regret Regimes
- Statistical performance of support vector machines
- Permutational Rademacher Complexity
- Optimal model selection in heteroscedastic regression using piecewise polynomial functions
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Nonasymptotic upper bounds for the reconstruction error of PCA
- Fast rates by transferring from auxiliary hypotheses
- Direct importance estimation for covariate shift adaptation
- Nonasymptotic analysis of robust regression with modified Huber's loss
- When are epsilon-nets small?
- Learning without concentration for general loss functions
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- Complexities of convex combinations and bounding the generalization error in classification
- DOI: 10.1162/153244302760200650
- On nonparametric classification with missing covariates
- Sparse additive support vector machines in bounded variation space
- From Gauss to Kolmogorov: localized measures of complexity for ellipses
- The geometry of hypothesis testing over convex cones: generalized likelihood ratio tests and minimax radii
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
- Random subclass bounds
- Optimal convergence rate of the universal estimation error
- Asymptotic sequential Rademacher complexity of a finite function class
- On the optimality of sample-based estimates of the expectation of the empirical minimizer
- Sparsity in penalized empirical risk minimization
- On the importance of small coordinate projections
- Regularization in kernel learning
- Robust supervised learning with coordinate gradient descent
- Variance-based regularization with convex objectives
- DOI: 10.1162/153244303321897690
- Low-Rank Covariance Function Estimation for Multidimensional Functional Data
- Fast learning from \(\alpha\)-mixing observations
- Model selection by bootstrap penalization for classification
- Convolutional spectral kernel learning with generalization guarantees
- Theory of Classification: a Survey of Some Recent Advances
- Concentration inequalities for samples without replacement
- Learning models with uniform performance via distributionally robust optimization
- A tight upper bound on the generalization error of feedforward neural networks
- Full error analysis for the training of deep neural networks
- Online pairwise learning algorithms with convex loss functions
- Online regularized learning with pairwise loss functions
- Bayesian fractional posteriors
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Statistics of robust optimization: a generalized empirical likelihood approach
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Guaranteed Functional Tensor Singular Value Decomposition
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- Penalized empirical risk minimization over Besov spaces