Local Rademacher complexities
DOI: 10.1214/009053605000000282 · zbMATH Open: 1083.62034 · arXiv: math/0508275 · OpenAlex: W3100743579 · Wikidata: Q105584239 · Scholia: Q105584239 · MaRDI QID: Q2583411 · FDO: Q2583411
Authors: Peter L. Bartlett, Olivier Bousquet, Shahar Mendelson
Publication date: 16 January 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0508275
MSC classification
- Nonparametric regression and quantile regression (62G08)
- Complexity and performance of numerical algorithms (65Y20)
- Analysis of algorithms and problem complexity (68Q25)
- Computational learning theory (68Q32)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Asymptotic Statistics
- Title not available
- Convergence of stochastic processes
- Title not available
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Title not available
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- A Bennett concentration inequality and its application to suprema of empirical processes
- A distribution-free theory of nonparametric regression
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Uniform Central Limit Theorems
- Improving the sample complexity using global data
- Une inégalité de Bennett pour les maxima de processus empiriques. (A Bennett-type inequality for maxima of empirical processes)
- Some applications of concentration inequalities to statistics
- Rademacher and Gaussian complexities: risk bounds and structural results (10.1162/153244303321897690)
- Convexity, Classification, and Risk Bounds
- Concentration inequalities using the entropy method
- Title not available
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Title not available
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- The importance of convexity in learning with squared loss
- Title not available
- Title not available
- Empirical minimization
- Model selection and error estimation
- A sharp concentration inequality with applications
- Complexity regularization via localized random penalties
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- A new approach to least-squares estimation, with applications
- Title not available
- Advanced lectures on machine learning. Machine learning summer school 2002, Canberra, Australia, February 11--22, 2002. Revised lectures
Cited In (only showing first 100 items)
- Learning models with uniform performance via distributionally robust optimization
- A tight upper bound on the generalization error of feedforward neural networks
- Online pairwise learning algorithms with convex loss functions
- Statistics of robust optimization: a generalized empirical likelihood approach
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- PAC-learning with approximate predictors
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Non-convex projected gradient descent for generalized low-rank tensor regression
- Title not available
- Title not available
- Generalization bounds for non-stationary mixing processes
- Metamodel construction for sensitivity analysis
- Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
- Multi-kernel learning for multi-label classification with local Rademacher complexity
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
- Title not available
- Handling concept drift via model reuse
- Robust multicategory support vector machines using difference convex algorithm
- Influence diagnostics in support vector machines
- Distribution-dependent sample complexity of large margin learning
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Title not available
- Regularization and the small-ball method. II: Complexity dependent error rates
- Distribution-free robust linear regression
- Empirical variance minimization with applications in variance reduction and optimal control
- Title not available
- Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings
- Compressive statistical learning with random feature moments
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Noisy discriminant analysis with boundary assumptions
- Surrogate losses in passive and active learning
- Learning with convex loss and indefinite kernels
- Robust statistical learning with Lipschitz and convex loss functions
- Convergence of online pairwise regression learning with quadratic loss
- Nonparametric distributed learning under general designs
- "Local" vs. "global" parameters -- breaking the Gaussian complexity barrier
- Transductive Rademacher complexity and its applications
- Mean estimation and regression under heavy-tailed distributions: A survey
- Selective Rademacher penalization and reduced error pruning of decision trees
- Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds
- Rademacher complexity in Neyman-Pearson classification
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Title not available
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Title not available
- Deep learning: a statistical viewpoint
- Distributed learning for sketched kernel regression
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Permutational Rademacher Complexity
- Optimal model selection in heteroscedastic regression using piecewise polynomial functions
- Nonasymptotic upper bounds for the reconstruction error of PCA
- Variance-based regularization with convex objectives
- Fast learning from \(\alpha\)-mixing observations
- Model selection by bootstrap penalization for classification
- Convolutional spectral kernel learning with generalization guarantees
- Concentration inequalities for samples without replacement
- Full error analysis for the training of deep neural networks
- Online regularized learning with pairwise loss functions
- Bayesian fractional posteriors
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Penalized empirical risk minimization over Besov spaces
- On the empirical estimation of integral probability metrics
- Local learning estimates by integral operators
- Estimating individualized treatment rules using outcome weighted learning
- Estimates of the approximation error using Rademacher complexity: Learning vector-valued functions
- Combinatorial bounds for learning performance
- Model selection in reinforcement learning
- Robustness and generalization
- Transfer bounds for linear feature learning
- On the optimal estimation of probability measures in weak and strong topologies
- Consistency analysis of an empirical minimum error entropy algorithm
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning
- VC dimension, fat-shattering dimension, Rademacher averages, and their applications
- Bootstrap model selection for possibly dependent and heterogeneous data
- Model selection by resampling penalization
- An improved analysis of the Rademacher data-dependent bound using its self bounding property
- Empirical minimization
- Title not available
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
- Singularity, misspecification and the convergence rate of EM
- Convergence rates of least squares regression estimators with heavy-tailed errors
- Comments on: Support vector machines maximizing geometric margins for multi-class classification
- Using the doubling dimension to analyze the generalization of learning algorithms
- Margin-adaptive model selection in statistical learning
- Convergence rates of generalization errors for margin-based classification
- Title not available
- Boosting with early stopping: convergence and consistency
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- A local Vapnik-Chervonenkis complexity
- Estimating conditional quantiles with the help of the pinball loss
- Smooth sparse coding via marginal regression for learning sparse representations