Local Rademacher complexities
DOI: 10.1214/009053605000000282 · zbMATH Open: 1083.62034 · arXiv: math/0508275 · OpenAlex: W3100743579 · Wikidata: Q105584239 · Scholia: Q105584239 · MaRDI QID: Q2583411 · FDO: Q2583411
Authors: Peter L. Bartlett, Olivier Bousquet, Shahar Mendelson
Publication date: 16 January 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0508275
MSC classification:
- 62G08 Nonparametric regression and quantile regression
- 65Y20 Complexity and performance of numerical algorithms
- 68Q25 Analysis of algorithms and problem complexity
- 68Q32 Computational learning theory
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Asymptotic Statistics
- Convergence of stochastic processes
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- A Bennett concentration inequality and its application to suprema of empirical processes
- A distribution-free theory of nonparametric regression
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Uniform Central Limit Theorems
- Improving the sample complexity using global data
- Une inégalité de Bennett pour les maxima de processus empiriques (A Bennett-type inequality for maxima of empirical processes)
- Some applications of concentration inequalities to statistics
- DOI: 10.1162/153244303321897690
- Convexity, Classification, and Risk Bounds
- Concentration inequalities using the entropy method
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- The importance of convexity in learning with squared loss
- Empirical minimization
- Model selection and error estimation
- A sharp concentration inequality with applications
- Complexity regularization via localized random penalties
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- A new approach to least-squares estimation, with applications
- Advanced lectures on machine learning. Machine learning summer school 2002, Canberra, Australia, February 11--22, 2002. Revised lectures
Cited In (first 100 items shown)
- Online regularized learning with pairwise loss functions
- Bayesian fractional posteriors
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Penalized empirical risk minimization over Besov spaces
- On the empirical estimation of integral probability metrics
- Local learning estimates by integral operators
- Estimating individualized treatment rules using outcome weighted learning
- Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions
- Combinatorial bounds for learning performance
- Model selection in reinforcement learning
- Robustness and generalization
- Transfer bounds for linear feature learning
- On the optimal estimation of probability measures in weak and strong topologies
- Consistency analysis of an empirical minimum error entropy algorithm
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- VC dimension, fat-shattering dimension, Rademacher averages, and their applications
- Bootstrap model selection for possibly dependent and heterogeneous data
- Model selection by resampling penalization
- An improved analysis of the Rademacher data-dependent bound using its self bounding property
- Empirical minimization
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Singularity, misspecification and the convergence rate of EM
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Comments on: Support vector machines maximizing geometric margins for multi-class classification
- Using the doubling dimension to analyze the generalization of learning algorithms
- Margin-adaptive model selection in statistical learning
- Convergence rates of generalization errors for margin-based classification
- Boosting with early stopping: convergence and consistency
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- A local Vapnik-Chervonenkis complexity
- Estimating conditional quantiles with the help of the pinball loss
- Smooth sparse coding via marginal regression for learning sparse representations
- An elementary analysis of ridge regression with random design
- Multi-kernel regularized classifiers
- Fast rates for support vector machines using Gaussian kernels
- Complexity regularization via localized random penalties
- Approximation properties of certain operator-induced norms on Hilbert spaces
- A Statistical Learning Approach to Modal Regression
- Obtaining fast error rates in nonconvex situations
- Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces
- On the uniform convergence of empirical norms and inner products, with application to causal inference
- U-Processes and Preference Learning
- Statistical properties of kernel principal component analysis
- Monte Carlo algorithms for optimal stopping and statistical learning
- Rademacher Chaos Complexities for Learning the Kernel Problem
- Concentration estimates for learning with unbounded sampling
- Oracle inequalities for support vector machines that are based on random entropy numbers
- Approximation error bounds via Rademacher's complexity
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Optimal dyadic decision trees
- Improving the sample complexity using global data
- Inverse statistical learning
- Regularized learning schemes in feature Banach spaces
- Minimax fast rates for discriminant analysis with errors in variables
- Learning without concentration
- Approximation by neural networks and learning theory
- Fast rates for estimation error and oracle inequalities for model selection
- Statistical performance of support vector machines
- DOI: 10.1162/153244302760200650
- Complexities of convex combinations and bounding the generalization error in classification
- Learning without concentration for general loss functions
- Direct importance estimation for covariate shift adaptation
- On nonparametric classification with missing covariates
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
- The geometry of hypothesis testing over convex cones: generalized likelihood ratio tests and minimax radii
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning
- Optimal convergence rate of the universal estimation error
- Asymptotic sequential Rademacher complexity of a finite function class
- Sparsity in penalized empirical risk minimization
- Regularization in kernel learning
- DOI: 10.1162/153244303321897690
- Theory of Classification: a Survey of Some Recent Advances
- Learning models with uniform performance via distributionally robust optimization
- A tight upper bound on the generalization error of feedforward neural networks
- Online pairwise learning algorithms with convex loss functions
- Statistics of robust optimization: a generalized empirical likelihood approach
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- PAC-learning with approximate predictors
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Non-convex projected gradient descent for generalized low-rank tensor regression
- Generalization bounds for non-stationary mixing processes
- Metamodel construction for sensitivity analysis
- Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
- Multi-kernel learning for multi-label classification with local Rademacher complexity
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
- Handling concept drift via model reuse
- Robust multicategory support vector machines using difference convex algorithm
- Influence diagnostics in support vector machines