Algorithms for Sparse Support Vector Machines
From MaRDI portal
Publication:6180740
Abstract: Many problems in classification involve huge numbers of irrelevant features. Model selection reveals the crucial features, reduces the dimensionality of the feature space, and improves model interpretation. In the support vector machine literature, model selection is achieved by \(\ell_1\) penalties. These convex relaxations seriously bias parameter estimates toward 0 and tend to admit too many irrelevant features. The current paper presents an alternative that replaces penalties by sparse-set constraints. Penalties still appear, but serve a different purpose. The proximal distance principle takes a loss function \(L(\beta)\) and adds the penalty \(\frac{\rho}{2}\operatorname{dist}(\beta, S_k)^2\) capturing the squared Euclidean distance of the parameter vector \(\beta\) to the sparsity set \(S_k\), where at most \(k\) components of \(\beta\) are nonzero. If \(\beta_\rho\) represents the minimum of the objective \(f_\rho(\beta) = L(\beta) + \frac{\rho}{2}\operatorname{dist}(\beta, S_k)^2\), then \(\beta_\rho\) tends to the constrained minimum of \(L(\beta)\) over \(S_k\) as \(\rho\) tends to \(\infty\). We derive two closely related algorithms to carry out this strategy. Our simulated and real examples vividly demonstrate how the algorithms achieve much better sparsity without loss of classification power.
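The proximal distance strategy described in the abstract can be sketched in a few lines: project onto the sparsity set, minimize a distance-penalized loss, and anneal the penalty constant upward. The sketch below is a minimal illustration under assumed choices, not the paper's two algorithms: the squared-hinge loss, the gradient-descent inner solver, the step-size rule, and the annealing schedule are all assumptions made for the example.

```python
import numpy as np

def project_sparse(beta, k):
    """Euclidean projection onto the sparsity set S_k: keep the k
    largest-magnitude entries of beta and zero out the rest."""
    out = np.zeros_like(beta)
    if k > 0:
        idx = np.argsort(np.abs(beta))[-k:]
        out[idx] = beta[idx]
    return out

def proximal_distance_svm(X, y, k, rho_max=1e4, n_outer=30, n_inner=100):
    """Illustrative proximal-distance scheme for a sparse linear SVM
    (squared-hinge loss, no intercept).  Approximately minimizes
        L(beta) + (rho/2) * dist(beta, S_k)^2
    by gradient descent on a surrogate that fixes the projection,
    annealing rho upward so the iterates approach S_k."""
    n, p = X.shape
    beta = np.zeros(p)
    rho = 1.0
    for _ in range(n_outer):
        proj = project_sparse(beta, k)       # anchor of the current surrogate
        lr = 1.0 / (4.0 + rho)               # conservative step size (assumed)
        for _ in range(n_inner):
            margins = 1.0 - y * (X @ beta)
            active = margins > 0             # observations violating the margin
            grad = -(2.0 / n) * (X[active] * y[active, None]).T @ margins[active]
            grad += rho * (beta - proj)      # gradient of the distance penalty
            beta -= lr * grad
        rho = min(rho * 1.5, rho_max)        # anneal rho toward infinity
    return project_sparse(beta, k)           # final hard projection onto S_k
```

On a toy problem where only two of ten features determine the labels, the returned coefficient vector has at most \(k\) nonzero entries, illustrating how the distance penalty enforces sparsity without an \(\ell_1\)-style shrinkage bias on the surviving coefficients.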
Cites work
- scientific article (zbMATH DE number 6670750)
- scientific article (zbMATH DE number 1810276)
- scientific article (zbMATH DE number 3912096)
- 10.1162/15324430152748218
- An algorithmic approach to nonlinear analysis and optimization
- Computational learning theory. 14th annual conference, COLT 2001, and 5th European conference, EuroCOLT 2001, Amsterdam, Netherlands, July 16--19, 2001. Proceedings
- Distance majorization and its applications
- Incremental support vector learning: analysis, implementation and applications
- LIBLINEAR: a library for large linear classification
- MM optimization algorithms
- Multicategory vertex discriminant analysis for high-dimensional data
- Proximal distance algorithms: theory and practice
- Regularization, optimization, kernels, and support vector machines
- SVM-Maj: a majorization approach to linear support vector machines with different hinge errors
- Safe Feature Elimination in Sparse Supervised Learning
- Some results on Tchebycheffian spline functions and stochastic processes
- Strong Rules for Discarding Predictors in Lasso-Type Problems
- Support-vector networks
- Training invariant support vector machines
- Variational methods for the solution of problems of equilibrium and vibrations