Local Rademacher complexities
DOI: 10.1214/009053605000000282 · zbMATH Open: 1083.62034 · arXiv: math/0508275 · OpenAlex: W3100743579 · Wikidata: Q105584239 · Scholia: Q105584239 · MaRDI QID: Q2583411 · FDO: Q2583411
Authors: Peter L. Bartlett, Olivier Bousquet, Shahar Mendelson
Publication date: 16 January 2006
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/math/0508275
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Complexity and performance of numerical algorithms (65Y20)
- Analysis of algorithms and problem complexity (68Q25)
- Computational learning theory (68Q32)
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Asymptotic Statistics
- Convergence of stochastic processes
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Sharper bounds for Gaussian and empirical processes
- Smooth discrimination analysis
- A Bennett concentration inequality and its application to suprema of empirical processes
- A distribution-free theory of nonparametric regression
- About the constants in Talagrand's concentration inequalities for empirical processes.
- Uniform Central Limit Theorems
- Improving the sample complexity using global data
- Une inégalité de Bennett pour les maxima de processus empiriques. (A Bennett-type inequality for maxima of empirical processes)
- Some applications of concentration inequalities to statistics
- DOI: 10.1162/153244303321897690
- Convexity, Classification, and Risk Bounds
- Concentration inequalities using the entropy method
- Sphere packing numbers for subsets of the Boolean \(n\)-cube with bounded Vapnik-Chervonenkis dimension
- Rademacher penalties and structural risk minimization
- Rademacher averages and phase transitions in Glivenko-Cantelli classes
- The importance of convexity in learning with squared loss
- Empirical minimization
- Model selection and error estimation
- A sharp concentration inequality with applications
- Complexity regularization via localized random penalties
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- A new approach to least-squares estimation, with applications
- Advanced lectures on machine learning. Machine learning summer school 2002, Canberra, Australia, February 11--22, 2002. Revised lectures
Cited In (showing first 100 items)
- Learning models with uniform performance via distributionally robust optimization
- A tight upper bound on the generalization error of feedforward neural networks
- Online pairwise learning algorithms with convex loss functions
- Statistics of robust optimization: a generalized empirical likelihood approach
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model
- Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- PAC-learning with approximate predictors
- Suboptimality of constrained least squares and improvements via non-linear predictors
- Non-convex projected gradient descent for generalized low-rank tensor regression
- Generalization bounds for non-stationary mixing processes
- Metamodel construction for sensitivity analysis
- Convergence rates for empirical barycenters in metric spaces: curvature, convexity and extendable geodesics
- Multi-kernel learning for multi-label classification with local Rademacher complexity
- Kernelized elastic net regularization: generalization bounds, and sparse recovery
- Analysis of the generalization error: empirical risk minimization over deep artificial neural networks overcomes the curse of dimensionality in the numerical approximation of Black-Scholes partial differential equations
- Handling concept drift via model reuse
- Robust multicategory support vector machines using difference convex algorithm
- Influence diagnostics in support vector machines
- Distribution-dependent sample complexity of large margin learning
- ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
- Estimation bounds and sharp oracle inequalities of regularized procedures with Lipschitz loss functions
- Regularization and the small-ball method. II: Complexity dependent error rates
- Distribution-free robust linear regression
- Empirical variance minimization with applications in variance reduction and optimal control
- Rademacher complexity for Markov chains: applications to kernel smoothing and Metropolis-Hastings
- Compressive statistical learning with random feature moments
- Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- A unified penalized method for sparse additive quantile models: an RKHS approach
- Noisy discriminant analysis with boundary assumptions
- Surrogate losses in passive and active learning
- Learning with convex loss and indefinite kernels
- Robust statistical learning with Lipschitz and convex loss functions
- Convergence of online pairwise regression learning with quadratic loss
- Nonparametric distributed learning under general designs
- "Local" vs. "global" parameters -- breaking the Gaussian complexity barrier
- Transductive Rademacher complexity and its applications
- Mean estimation and regression under heavy-tailed distributions: A survey
- Selective Rademacher penalization and reduced error pruning of decision trees
- Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds
- Rademacher complexity in Neyman-Pearson classification
- Fast generalization error bound of deep learning without scale invariance of activation functions
- Fast rates for general unbounded loss functions: from ERM to generalized Bayes
- Deep learning: a statistical viewpoint
- Distributed learning for sketched kernel regression
- Average stability is invariant to data preconditioning. Implications to exp-concave empirical risk minimization
- Permutational Rademacher Complexity
- Optimal model selection in heteroscedastic regression using piecewise polynomial functions
- Nonasymptotic upper bounds for the reconstruction error of PCA
- Variance-based regularization with convex objectives
- Fast learning from \(\alpha\)-mixing observations
- Model selection by bootstrap penalization for classification
- Convolutional spectral kernel learning with generalization guarantees
- Concentration inequalities for samples without replacement
- Full error analysis for the training of deep neural networks
- Guaranteed Functional Tensor Singular Value Decomposition
- Orthogonal statistical learning
- Rademacher Margin Complexity
- Sample average approximation with heavier tails. I: Non-asymptotic bounds with weak assumptions and stochastic constraints
- Learning rates for partially linear support vector machine in high dimensions
- Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso
- Local Rademacher complexity-based learning guarantees for multi-task learning
- An error analysis for deep binary classification with sigmoid loss
- Graphical convergence of subgradients in nonconvex optimization and learning
- Deep nonlinear sufficient dimension reduction
- Locally simultaneous inference
- Improvement of multiple kernel learning using adaptively weighted regularization
- Data-adaptive discriminative feature localization with statistically guaranteed interpretation
- Minimax rates for conditional density estimation via empirical entropy
- Strong overall error analysis for the training of artificial neural networks via random initializations
- Researches on Rademacher complexities in statistical learning theory: a survey
- Benign Overfitting and Noisy Features
- Continuity of Performance Metrics for Thin Feature Maps
- Rademacher complexity and grammar induction algorithms: what it may (not) tell us
- Complexity control in statistical learning
- Asset pricing with neural networks: significance tests
- Minimax nonparametric multi-sample test under smoothing
- Complexity of pattern classes and the Lipschitz property
- Nearest neighbor empirical processes
- Fast rates of minimum error entropy with heavy-tailed noise
- Estimation of partially conditional average treatment effect by double kernel-covariate balancing
- Solving PDEs on spheres with physics-informed convolutional neural networks
- A moment-matching approach to testable learning and a new characterization of Rademacher complexity
- Comment
- Refined Rademacher chaos complexity bounds with applications to the multikernel learning problem
- On mean estimation for heteroscedastic random variables
- Smooth Contextual Bandits: Bridging the Parametric and Nondifferentiable Regret Regimes
- Fast rates by transferring from auxiliary hypotheses
- Sparse additive support vector machines in bounded variation space
- Nonasymptotic analysis of robust regression with modified Huber's loss
- When are epsilon-nets small?