The following pages link to Local Rademacher complexities (Q2583411):
Displaying 50 items.
- Direct importance estimation for covariate shift adaptation (Q144623)
- On the optimal estimation of probability measures in weak and strong topologies (Q282569)
- Consistency analysis of an empirical minimum error entropy algorithm (Q285539)
- Tikhonov, Ivanov and Morozov regularization for support vector machine learning (Q285946)
- Smooth sparse coding via marginal regression for learning sparse representations (Q309913)
- Inverse statistical learning (Q364201)
- Model selection in reinforcement learning (Q415618)
- Robustness and generalization (Q420915)
- Margin-adaptive model selection in statistical learning (Q453298)
- An improved analysis of the Rademacher data-dependent bound using its self bounding property (Q459446)
- Optimal convergence rate of the universal estimation error (Q516004)
- Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions (Q526680)
- Monte Carlo algorithms for optimal stopping and statistical learning (Q558680)
- Estimating conditional quantiles with the help of the pinball loss (Q637098)
- The geometry of hypothesis testing over convex cones: generalized likelihood ratio tests and minimax radii (Q666588)
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning (Q669277)
- Approximation properties of certain operator-induced norms on Hilbert spaces (Q765689)
- Learning models with uniform performance via distributionally robust optimization (Q820804)
- A reproducing kernel Hilbert space approach to high dimensional partially varying coefficient model (Q830540)
- Sparsity in penalized empirical risk minimization (Q838303)
- Regularization in kernel learning (Q847647)
- Multi-kernel regularized classifiers (Q870343)
- Bootstrap model selection for possibly dependent and heterogeneous data (Q904102)
- Using the doubling dimension to analyze the generalization of learning algorithms (Q923877)
- Obtaining fast error rates in nonconvex situations (Q933417)
- Fast rates for support vector machines using Gaussian kernels (Q995417)
- Convergence rates of generalization errors for margin-based classification (Q1021988)
- Rademacher complexity in Neyman-Pearson classification (Q1034311)
- Learning without concentration for general loss functions (Q1647935)
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies (Q1657947)
- Localization of VC classes: beyond local Rademacher complexities (Q1663641)
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples (Q1669081)
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression (Q1730072)
- Bayesian fractional posteriors (Q1731743)
- Robust multicategory support vector machines using difference convex algorithm (Q1749454)
- Oracle inequalities for sparse additive quantile regression in reproducing kernel Hilbert space (Q1750287)
- Concentration estimates for learning with unbounded sampling (Q1946480)
- On the empirical estimation of integral probability metrics (Q1950872)
- Optimal model selection in heteroscedastic regression using piecewise polynomial functions (Q1951154)
- Model selection by resampling penalization (Q1951992)
- Penalized empirical risk minimization over Besov spaces (Q1952004)
- Transfer bounds for linear feature learning (Q1959489)
- Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces (Q1979424)
- A tight upper bound on the generalization error of feedforward neural networks (Q1982395)
- Singularity, misspecification and the convergence rate of EM (Q1996764)
- Surrogate losses in passive and active learning (Q2008623)
- Fast generalization error bound of deep learning without scale invariance of activation functions (Q2055056)
- An elementary analysis of ridge regression with random design (Q2080945)
- Convolutional spectral kernel learning with generalization guarantees (Q2093403)
- Suboptimality of constrained least squares and improvements via non-linear predictors (Q2108490)