scientific article
From MaRDI portal
Publication:3093220
zbMath: 1222.68167; MaRDI QID: Q3093220
Qiang Wu, Ding-Xuan Zhou, Yiming Ying, Di-Rong Chen
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v5/chen04b.html
Title: Support vector machine soft margin classifiers: error analysis
misclassification error; approximation error; regularization error; support vector machine classification; \(q\)-norm soft margin classifier
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
Learning and adaptive systems in artificial intelligence (68T05)
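For context, the \(q\)-norm soft margin classifier named in the keywords is commonly written as the regularized scheme below. This is a sketch of the standard general setup, not text from the paper: the sample is \(\mathbf{z} = \{(x_i, y_i)\}_{i=1}^m \subset X \times \{-1, +1\}\), \(\mathcal{H}_K\) is the reproducing kernel Hilbert space of a Mercer kernel \(K\), \(\lambda > 0\) is a regularization parameter, and the notation here may differ from the authors'.
\[
f_{\mathbf{z}} = \operatorname*{arg\,min}_{f \in \mathcal{H}_K} \left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl(1 - y_i f(x_i)\bigr)_{+}^{q} + \lambda \|f\|_K^2 \right\}, \qquad q \ge 1.
\]
The induced classifier is \(\operatorname{sign}(f_{\mathbf{z}})\); its excess misclassification error is typically bounded by a sample-error term plus the regularization (approximation) error, which is how the keywords above fit together.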
Related Items (only showing first 100 items)
Learning rates for partially linear support vector machine in high dimensions ⋮ Generalization performance of Lagrangian support vector machine based on Markov sampling ⋮ Statistical consistency of coefficient-based conditional quantile regression ⋮ Fully online classification by regularization ⋮ Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications ⋮ Minimax optimal rates of convergence for multicategory classifications ⋮ Unnamed Item ⋮ Learning with sample dependent hypothesis spaces ⋮ Error bounds of multi-graph regularized semi-supervised classification ⋮ Learning rates of kernel-based robust classification ⋮ The consistency of least-square regularized regression with negative association sequence ⋮ Multi-kernel regularized classifiers ⋮ Learning rates of regularized regression on the unit sphere ⋮ An oracle inequality for regularized risk minimizers with strongly mixing observations ⋮ Kernel-based sparse regression with the correntropy-induced loss ⋮ The learning rate of \(l_2\)-coefficient regularized classification with strong loss ⋮ Optimal learning rates for least squares regularized regression with unbounded sampling ⋮ Generalization bounds of ERM algorithm with Markov chain samples ⋮ Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ Kernel-based maximum correntropy criterion with gradient descent method ⋮ On the convergence rate of kernel-based sequential greedy regression ⋮ On the K-functional in learning theory ⋮ Approximation analysis of learning algorithms for support vector regression and quantile regression ⋮ Statistical performance of support vector machines ⋮ Learning rates of regularized regression for exponentially strongly mixing sequence ⋮ Learning from regularized regression algorithms with \(p\)-order Markov chain sampling ⋮ Concentration estimates for learning with unbounded sampling ⋮ Consistency and convergence rate for nearest subspace classifier ⋮ Robust fuzzy rough classifiers ⋮ Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains ⋮ Optimal regression rates for SVMs using Gaussian kernels ⋮ Estimation of convergence rate for multi-regression learning algorithm ⋮ Conditional quantiles with varying Gaussians ⋮ Online learning for quantile regression and support vector regression ⋮ New robust unsupervised support vector machines ⋮ The generalization performance of ERM algorithm with strongly mixing observations ⋮ Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem ⋮ Quantile regression with \(\ell_1\)-regularization and Gaussian kernels ⋮ Convergence rate of the semi-supervised greedy algorithm ⋮ A Note on Support Vector Machines with Polynomial Kernels ⋮ Analysis of Online Composite Mirror Descent Algorithm ⋮ Generalization Analysis of Fredholm Kernel Regularized Classifiers ⋮ Learning Rates for Classification with Gaussian Kernels ⋮
Classification with non-i.i.d. sampling ⋮ Compressed classification learning with Markov chain samples ⋮ Convergence rate of SVM for kernel-based robust regression ⋮ Kernel gradient descent algorithm for information theoretic learning ⋮ Convergence analysis of online algorithms ⋮ Generalization performance of least-square regularized regression algorithm with Markov chain samples ⋮ Constructive analysis for coefficient regularization regression algorithms ⋮ Learning rates for kernel-based expectile regression ⋮ Coefficient-based regularization network with variance loss for error ⋮ Estimation of the misclassification error for multicategory support vector machine classification ⋮ Unregularized online learning algorithms with general loss functions ⋮ Perturbation of convex risk minimization and its application in differential private learning algorithms ⋮ Learning performance of regularized regression with multiscale kernels based on Markov observations ⋮ Classification with polynomial kernels and \(l^1\)-coefficient regularization ⋮ On the rate of convergence for multi-category classification based on convex losses ⋮ Parzen windows for multi-class classification ⋮ Learning rates for regularized classifiers using multivariate polynomial kernels ⋮ Learning from dependent observations ⋮ Learning and approximation by Gaussians on Riemannian manifolds ⋮ Convergence rates of learning algorithms by random projection ⋮ The convergence rate for a \(K\)-functional in learning theory ⋮ Support vector machines regression with \(l^1\)-regularizer ⋮ Logistic classification with varying gaussians ⋮ Learning rates for multi-kernel linear programming classifiers ⋮ Learning from non-identical sampling for classification ⋮ Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions ⋮ Learning rates of multi-kernel regularized regression ⋮ The consistency of multicategory support vector machines ⋮ Approximation with polynomial kernels and SVM classifiers ⋮ Unnamed Item ⋮ Convergence of online mirror descent ⋮ Statistical analysis of the moving least-squares method with unbounded sampling ⋮ Unregularized online algorithms with varying Gaussians ⋮ Approximating and learning by Lipschitz kernel on the sphere ⋮ Error analysis of multicategory support vector machine classifiers ⋮ On Reject and Refine Options in Multicategory Classification ⋮ Learning rates of gradient descent algorithm for classification ⋮ The guaranteed estimation of the Lipschitz classifier accuracy: confidence set approach ⋮ Moving quantile regression ⋮ Analysis of regularized least-squares in reproducing kernel Kreĭn spaces ⋮ Analysis of support vector machines regression ⋮ Learning from uniformly ergodic Markov chains ⋮ Regularized ranking with convex losses and \(\ell^1\)-penalty ⋮ Learning rates of regression with q-norm loss and threshold ⋮ Error bounds for learning the kernel ⋮ Online regularized generalized gradient classification algorithms ⋮ Sparse additive machine with ramp loss ⋮ Generalization performance of graph-based semi-supervised classification ⋮ Gradient learning in a classification setting by gradient descent ⋮ Online Classification with Varying Gaussians ⋮ Generalization performance of Gaussian kernels SVMC based on Markov sampling ⋮ Extreme learning machine for ranking: generalization analysis and applications ⋮ Unnamed Item ⋮ Unnamed Item ⋮ Comparison theorems on large-margin learning
This page was built for publication: Support vector machine soft margin classifiers: error analysis