On the robustness of regularized pairwise learning methods based on kernels
From MaRDI portal
Abstract: Regularized empirical risk minimization, including support vector machines, plays an important role in machine learning theory. In this paper, regularized pairwise learning (RPL) methods based on kernels are investigated. One example is regularized minimization of the error entropy loss, which has recently attracted considerable interest from the viewpoint of consistency and learning rates. This paper shows that such RPL methods additionally have good statistical robustness properties if the loss function and the kernel are chosen appropriately. Two cases of particular interest are treated: (i) a bounded and non-convex loss function and (ii) an unbounded convex loss function satisfying a certain Lipschitz-type condition.
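To make the setting concrete, below is a minimal sketch of kernel-based regularized pairwise learning. It uses a squared pairwise loss as a simple stand-in for the error entropy loss discussed in the abstract; the Gaussian kernel, the function names, and the parameters `gamma` and `lam` are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_pairwise_krr(X, y, lam=1e-3, gamma=1.0):
    """Regularized pairwise learning with a squared pairwise loss.

    Minimizes
        (1/n^2) * sum_{i,j} ((f(x_i) - f(x_j)) - (y_i - y_j))^2  +  lam * ||f||_H^2
    over f in the RKHS of the Gaussian kernel.  By the representer theorem,
    f(x) = sum_k alpha_k * K(x, x_k), and setting the gradient with respect
    to alpha to zero reduces the problem to one linear system.
    """
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    # Centering matrix C = I - (1/n) * J; the pairwise loss equals
    # (2/n) * (K @ alpha - y)^T C (K @ alpha - y) up to the regularizer.
    C = np.eye(n) - np.ones((n, n)) / n
    # Stationarity: ((2/n) * C @ K + lam * I) @ alpha = (2/n) * C @ y
    A = (2.0 / n) * C @ K + lam * np.eye(n)
    b = (2.0 / n) * C @ y
    return np.linalg.solve(A, b)
```

Predictions at new points would be `gaussian_kernel(X_new, X, gamma) @ alpha`. Note that a pairwise loss constrains only the differences f(x_i) - f(x_j); the regularization term is what pins down the remaining additive degree of freedom.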
Recommendations
- On robustness properties of convex risk minimization methods for pattern recognition
- scientific article; zbMATH DE number 7295804
- Consistency and robustness of kernel-based regression in convex risk minimization
- Robustness and regularization of support vector machines
- Learning rates of kernel-based robust classification
Cites work
- scientific article; zbMATH DE number 1713116
- scientific article; zbMATH DE number 3901506
- scientific article; zbMATH DE number 3954047
- scientific article; zbMATH DE number 3658755
- scientific article; zbMATH DE number 3782216
- scientific article; zbMATH DE number 3637090
- scientific article; zbMATH DE number 491591
- scientific article; zbMATH DE number 1972340
- scientific article; zbMATH DE number 1476625
- scientific article; zbMATH DE number 1376935
- scientific article; zbMATH DE number 1391397
- scientific article; zbMATH DE number 3274494
- scientific article; zbMATH DE number 3320765
- scientific article; zbMATH DE number 3336465
- A General Qualitative Definition of Robustness
- Bootstrap methods: another look at the jackknife
- Compactly supported positive definite radial functions
- Consistency analysis of an empirical minimum error entropy algorithm
- Consistency and robustness of kernel-based regression in convex risk minimization
- Consistency of support vector machines using additive kernels for additive models
- Convex analysis and measurable multifunctions
- Distinguishability of Sets of Distributions
- Generalization bounds for ranking algorithms via algorithmic stability
- Information theoretic learning. Renyi's entropy and kernel perspectives
- Learning Theory
- Learning coordinate covariances via gradients
- Learning rates for the risk of kernel-based quantile regression estimators in additive models
- Learning theory approach to minimum error entropy criterion
- Learning with the maximum correntropy criterion induced losses for regression
- On McDiarmid's concentration inequality
- On consistency and robustness properties of support vector machines for heavy-tailed distributions
- On qualitative robustness of support vector machines
- On robustness properties of bootstrap approximations
- On robustness properties of convex risk minimization methods for pattern recognition
- Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree
- Qualitative robustness in abstract inference
- Ranking and empirical minimization of \(U\)-statistics
- Real Analysis and Probability
- Regularization schemes for minimum error entropy principle
- Robust Statistics
- Support Vector Machines
- The Influence Curve and Its Role in Robust Estimation
Cited in (12)
- Online pairwise learning algorithms with convex loss functions
- Distributed minimum error entropy algorithms
- On extension theorems and their connection to universal consistency in machine learning
- Stability and optimization error of stochastic gradient descent for pairwise learning
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- Convergence of online pairwise regression learning with quadratic loss
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Robust pairwise learning with Huber loss
- Consistency and robustness of kernel-based regression in convex risk minimization
- A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression
- scientific article; zbMATH DE number 7295804
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
This page was built for publication: On the robustness of regularized pairwise learning methods based on kernels (MaRDI item Q325147)