Multi-kernel regularized classifiers
Publication: 870343
DOI: 10.1016/j.jco.2006.06.007
zbMATH: 1171.65043
OpenAlex: W2083575259
Wikidata: Q58759019
Scholia: Q58759019
MaRDI QID: Q870343
Authors: Yiming Ying, Qiang Wu, Ding-Xuan Zhou
Publication date: 12 March 2007
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2006.06.007
Keywords: convergence rates; Tikhonov regularization; classification algorithm; misclassification error; regularization error; sample error; convex loss function; multi-kernel regularization scheme
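The keywords describe a Tikhonov-type regularization scheme taken over a family of kernels rather than a single fixed kernel. The display below is a minimal sketch, assuming the standard form of such schemes in the learning-theory literature; all symbols (the sample z of size m, the kernel set 𝒦, the reproducing kernel Hilbert spaces H_K, the convex loss φ, the regularization parameter λ) are introduced here for orientation only and are not quoted from the paper.

% Illustrative sketch of a multi-kernel Tikhonov regularization scheme with convex loss \phi:
% minimize the empirical \phi-risk plus an RKHS-norm penalty, jointly over the kernel K and over f in H_K.
\[
  f_{\mathbf{z}} \;=\; \operatorname*{arg\,min}_{K \in \mathcal{K}} \; \min_{f \in \mathcal{H}_K}
  \left\{ \frac{1}{m} \sum_{i=1}^{m} \phi\bigl(y_i f(x_i)\bigr) \;+\; \lambda \,\lVert f \rVert_K^2 \right\},
  \qquad \text{induced classifier: } \operatorname{sgn}(f_{\mathbf{z}}).
\]

On this reading, the keywords "regularization error" and "sample error" refer to the two terms in the usual decomposition used to bound the excess misclassification error of such a scheme, from which convergence rates are then derived.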
Related Items (91)
Optimality of the rescaled pure greedy learning algorithms ⋮ Machine learning with kernels for portfolio valuation and risk management ⋮ Error analysis for \(l^q\)-coefficient regularized moving least-square regression ⋮ Statistical consistency of coefficient-based conditional quantile regression ⋮ A Statistical Learning Approach to Modal Regression ⋮ The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary ⋮ ERM learning algorithm for multi-class classification ⋮ Fully online classification by regularization ⋮ Feature space perspectives for learning the kernel ⋮ Learning with sample dependent hypothesis spaces ⋮ Error bounds of multi-graph regularized semi-supervised classification ⋮ Learning rates of kernel-based robust classification ⋮ Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies ⋮ An efficient kernel learning algorithm for semisupervised regression problems ⋮ Kernel-based sparse regression with the correntropy-induced loss ⋮ The learning rate of \(l_2\)-coefficient regularized classification with strong loss ⋮ Error analysis on regularized regression based on the maximum correntropy criterion ⋮ Modal additive models with data-driven structure identification ⋮ Averaging versus voting: a comparative study of strategies for distributed classification ⋮ Optimal learning rates for least squares regularized regression with unbounded sampling ⋮ Least square regression with indefinite kernels and coefficient regularization ⋮ Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping ⋮ Convergence of online pairwise regression learning with quadratic loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ Kernel-based maximum correntropy criterion with gradient descent method ⋮ The convergence rate of semi-supervised regression with quadratic loss ⋮ On the convergence rate of kernel-based sequential greedy regression ⋮ On the K-functional in learning theory ⋮ Error analysis of kernel regularized pairwise learning with a strongly convex loss ⋮ Unnamed Item ⋮ Learning rates for regularized least squares ranking algorithm ⋮ ERM learning with unbounded sampling ⋮ Error analysis for coefficient-based regularized regression in additive models ⋮ Concentration estimates for learning with unbounded sampling ⋮ Conditional quantiles with varying Gaussians ⋮ Approximation properties of mixed sampling-Kantorovich operators ⋮ Error bounds for \(l^p\)-norm multiple kernel learning with least square loss ⋮ Online learning for quantile regression and support vector regression ⋮ Learning with Convex Loss and Indefinite Kernels ⋮ Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem ⋮ Quantile regression with \(\ell_1\)-regularization and Gaussian kernels ⋮ Convergence rate of the semi-supervised greedy algorithm ⋮ A Note on Support Vector Machines with Polynomial Kernels ⋮ Generalization Analysis of Fredholm Kernel Regularized Classifiers ⋮ Learning Rates for Classification with Gaussian Kernels ⋮ Unified approach to coefficient-based regularized regression ⋮ Classification with non-i.i.d. 
sampling ⋮ A simpler approach to coefficient regularized support vector machines regression ⋮ Constructive analysis for least squares regression with generalized \(K\)-norm regularization ⋮ Convergence rate of SVM for kernel-based robust regression ⋮ A new comparison theorem on conditional quantiles ⋮ Calibration of \(\epsilon\)-insensitive loss in support vector machines regression ⋮ Convergence analysis of online algorithms ⋮ Constructive analysis for coefficient regularization regression algorithms ⋮ Unregularized online learning algorithms with general loss functions ⋮ Learning performance of regularized regression with multiscale kernels based on Markov observations ⋮ Classification with polynomial kernels and \(l^1\)-coefficient regularization ⋮ Learning rates for regularized classifiers using multivariate polynomial kernels ⋮ Learning and approximation by Gaussians on Riemannian manifolds ⋮ Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions ⋮ The convergence rate for a \(K\)-functional in learning theory ⋮ Logistic classification with varying gaussians ⋮ Learning the coordinate gradients ⋮ Learning rates for multi-kernel linear programming classifiers ⋮ Learning from non-identical sampling for classification ⋮ Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions ⋮ Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces ⋮ Learning rates of multi-kernel regularized regression ⋮ Learning errors of linear programming support vector regression ⋮ Concentration estimates for the moving least-square method in learning theory ⋮ Online chaotic time series prediction using unbiased composite kernel machine via Cholesky factorization ⋮ Rademacher Chaos Complexities for Learning the Kernel Problem ⋮ Convergence of online mirror descent ⋮ Statistical analysis of the moving least-squares method with unbounded sampling ⋮ Unregularized online algorithms with varying Gaussians ⋮ Oracle inequalities for support vector machines that are based on random entropy numbers ⋮ Regularized modal regression with data-dependent hypothesis spaces ⋮ A spectral series approach to high-dimensional nonparametric regression ⋮ Error analysis of multicategory support vector machine classifiers ⋮ Learning with correntropy-induced losses for regression with mixture of symmetric stable noise ⋮ Moving quantile regression ⋮ Analysis of Regression Algorithms with Unbounded Sampling ⋮ SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS ⋮ Modeling interactive components by coordinate kernel polynomial models ⋮ Error bounds for learning the kernel ⋮ Online regularized generalized gradient classification algorithms ⋮ Debiased magnitude-preserving ranking: learning rate and bias characterization ⋮ Online Classification with Varying Gaussians ⋮ Regularization schemes for minimum error entropy principle ⋮ Learning rates for partially linear support vector machine in high dimensions ⋮ CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION
Cites Work
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- The covering number in learning theory
- Support vector machines are universally consistent
- On the Bayes-risk consistency of regularized boosting methods.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Optimal aggregation of classifiers in statistical learning.
- Support-vector networks
- Weak convergence and empirical processes. With applications to statistics
- Regularization networks and support vector machines
- Statistical performance of support vector machines
- Local Rademacher complexities
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- DOI: 10.1162/153244302760185252
- Risk bounds for mixture density estimation
- Theory of Classification: a Survey of Some Recent Advances
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- Efficient agnostic learning of neural networks with bounded fan-in
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Chaos control using least-squares support vector machines
- Improving the sample complexity using global data
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Structural risk minimization over data-dependent hierarchies
- DOI: 10.1162/153244302760200704
- Shannon sampling and function reconstruction from point values
- DOI: 10.1162/1532443041424319
- Learning Theory
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels
- Some applications of concentration inequalities to statistics
- On the dual formulation of regularized linear systems with convex risks
- Choosing multiple parameters for support vector machines