SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
From MaRDI portal
Publication: 4678449
DOI: 10.1162/0899766053491896 · zbMath: 1108.90324 · OpenAlex: W2167711152 · MaRDI QID: Q4678449
Publication date: 23 May 2005
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/0899766053491896
MSC classes: Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Learning and adaptive systems in artificial intelligence (68T05) · Quadratic programming (90C20) · Linear programming (90C05)
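For orientation, the two formulations named in the title can be stated in their standard textbook form (a generic sketch, not quoted from the paper; the kernel \(K\), regularization parameter \(C\), and slack variables \(\xi_i\) below follow common convention, and the paper's exact normalization may differ):

\[
\text{(QP)}\quad \min_{f \in \mathcal{H}_K,\; b,\; \xi}\; \tfrac{1}{2}\|f\|_K^2 + C\sum_{i=1}^{m}\xi_i
\quad\text{s.t.}\quad y_i\bigl(f(x_i)+b\bigr) \ge 1-\xi_i,\;\; \xi_i \ge 0,
\]
\[
\text{(LP)}\quad \min_{\alpha,\; b,\; \xi}\; \sum_{j=1}^{m}|\alpha_j| + C\sum_{i=1}^{m}\xi_i
\quad\text{s.t.}\quad y_i\Bigl(\sum_{j=1}^{m}\alpha_j K(x_i,x_j)+b\Bigr) \ge 1-\xi_i,\;\; \xi_i \ge 0.
\]

Replacing the RKHS-norm penalty \(\|f\|_K^2\) by the \(\ell^1\) penalty on the coefficients of the kernel expansion turns the quadratic program into a linear program, which is the comparison the title refers to.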
Related Items (53)
- LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
- Learning with sample dependent hypothesis spaces
- Distributed learning via filtered hyperinterpolation on manifolds
- Multi-kernel regularized classifiers
- Error Analysis of Coefficient-Based Regularized Algorithm for Density-Level Detection
- An oracle inequality for regularized risk minimizers with strongly mixing observations
- Approximation by multivariate Bernstein-Durrmeyer operators and learning rates of least-squares regularized regression with multivariate polynomial kernels
- Generalization and learning rate of multi-class support vector classification and regression
- Least square regression with indefinite kernels and coefficient regularization
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping
- Quantitative convergence analysis of kernel based large-margin unified machines
- Error analysis for coefficient-based regularized regression in additive models
- Optimal convergence rates of deep neural networks in a classification setting
- Consistency and convergence rate for nearest subspace classifier
- Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
- Quantum-enhanced least-square support vector machine: simplified quantum algorithm and sparse solutions
- The generalization performance of ERM algorithm with strongly mixing observations
- Learning with Convex Loss and Indefinite Kernels
- Guaranteed Classification via Regularized Similarity Learning
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- A Note on Support Vector Machines with Polynomial Kernels
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- Generalization Analysis of Fredholm Kernel Regularized Classifiers
- Learning Rates for Classification with Gaussian Kernels
- A simpler approach to coefficient regularized support vector machines regression
- A new comparison theorem on conditional quantiles
- Constructive analysis for coefficient regularization regression algorithms
- Classification with polynomial kernels and \(l^1\)-coefficient regularization
- Learning rates for regularized classifiers using multivariate polynomial kernels
- Learning and approximation by Gaussians on Riemannian manifolds
- Support vector machines regression with \(l^1\)-regularizer
- Logistic classification with varying gaussians
- Learning rates for multi-kernel linear programming classifiers
- Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- Learning rates of multi-kernel regularized regression
- Learning errors of linear programming support vector regression
- REGULARIZED LEAST SQUARE REGRESSION WITH SPHERICAL POLYNOMIAL KERNELS
- Least Square Regression with lp-Coefficient Regularization
- Approximating and learning by Lipschitz kernel on the sphere
- Error analysis of multicategory support vector machine classifiers
- GENERALIZATION BOUNDS OF REGULARIZATION ALGORITHMS DERIVED SIMULTANEOUSLY THROUGH HYPOTHESIS SPACE COMPLEXITY, ALGORITHMIC STABILITY AND DATA QUALITY
- Analysis of support vector machines regression
- SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
- Sparse Learning for Large-Scale and High-Dimensional Data: A Randomized Convex-Concave Optimization Approach
- Deep neural networks for rotation-invariance approximation and learning
- Sparse additive machine with ramp loss
- Learning rates of least-square regularized regression with polynomial kernels
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere
- An efficient primal dual prox method for non-smooth optimization
- Comparison theorems on large-margin learning
- CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION
Cites Work
- A note on different covering numbers in learning theory.
- The covering number in learning theory
- Support vector machines are universally consistent
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Optimal aggregation of classifiers in statistical learning.
- Regularization networks and support vector machines
- Support vector machines with different norms: motivation, formulations and results
- On the mathematical foundations of learning
- Capacity of reproducing kernel spaces in learning theory
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Improving the sample complexity using global data
- DOI: 10.1162/153244302760200704
- DOI: 10.1162/153244302760200713
- Shannon sampling and function reconstruction from point values
- Are Loss Functions All the Same?
- Massive data discrimination via linear support vector machines
- Theory of Reproducing Kernels