Pegasos: primal estimated sub-gradient solver for SVM
Publication: 633112
DOI: 10.1007/S10107-010-0420-4
zbMATH Open: 1211.90239
DBLP: journals/mp/Shalev-ShwartzSSC11
OpenAlex: W2125993116
Wikidata: Q56095187 (Scholia: Q56095187)
MaRDI QID: Q633112
FDO: Q633112
Authors: Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, Andrew Cotter
Publication date: 31 March 2011
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-010-0420-4
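The publication describes Pegasos, a stochastic sub-gradient solver for the primal SVM objective that, at each step, samples one training example and takes a sub-gradient step with step size 1/(λt). The following is a minimal illustrative sketch of that update rule, not the authors' reference implementation; the function name, toy data, and hyperparameter values are chosen here for illustration.

```python
import random

def pegasos_train(X, y, lam=0.1, T=1000, seed=0):
    """Sketch of the basic Pegasos update for a linear hinge-loss SVM.

    At step t, pick one random example (x_i, y_i), use step size
    eta = 1 / (lam * t), and take a stochastic sub-gradient step on
    the regularized objective lam/2 * ||w||^2 + hinge loss.
    """
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    for t in range(1, T + 1):
        i = rng.randrange(len(X))
        eta = 1.0 / (lam * t)
        margin = y[i] * sum(w[j] * X[i][j] for j in range(d))
        # Shrink w: sub-gradient of the lam/2 * ||w||^2 term.
        scale = 1.0 - eta * lam
        w = [scale * wj for wj in w]
        # If the hinge loss is active (margin < 1), step toward y_i * x_i.
        if margin < 1.0:
            w = [wj + eta * y[i] * X[i][j] for j, wj in enumerate(w)]
    return w

# Toy usage: two linearly separable points with labels +1 and -1.
X = [[1.0, 1.0], [-1.0, -1.0]]
y = [1, -1]
w = pegasos_train(X, y, lam=0.1, T=200)
```

The paper also analyzes an optional projection of w onto a ball of radius 1/sqrt(lam) after each step, omitted here for brevity.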
Recommendations
- Stochastic subgradient estimation training for support vector machines
- Training a Support Vector Machine in the Primal
- A linear support vector machine solver for a large number of training examples
- Large-scale machine learning with stochastic gradient descent
- Insensitive stochastic gradient twin support vector machines for large scale problems
Cites Work
- Pegasos: primal estimated sub-gradient solver for SVM
- DOI: 10.1162/15324430260185619
- Some results on Tchebycheffian spline functions and stochastic processes
- An introduction to support vector machines and other kernel-based learning methods.
- Primal-dual subgradient methods for convex problems
- Convex Analysis
- Introduction to Stochastic Search and Optimization
- Large margin classification using the perceptron algorithm
- Logarithmic Regret Algorithms for Online Convex Optimization
- On the Generalization Ability of On-Line Learning Algorithms
- Online Learning with Kernels
- Fast kernel classifiers with online and active learning
- Training a Support Vector Machine in the Primal
- Statistical analysis of learning dynamics
- QP algorithms with guaranteed accuracy and run time for support vector machines
Cited In (first 100 items)
- Stochastic approximation with discontinuous dynamics, differential inclusions, and applications
- Decentralized hierarchical constrained convex optimization
- Online strongly convex optimization with unknown delays
- Nonlinear optimization and support vector machines
- Bilevel hyperparameter optimization for support vector classification: theoretical analysis and a solution method
- On the perceptron's compression
- On stochastic accelerated gradient with convergence rate of regression learning
- Evaluation of stochastic approximation algorithm and variants for learning support vector machines
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- On stochastic accelerated gradient with convergence rate
- Optimal Convergence Rates for the Proximal Bundle Method
- Object tracking by incremental structural learning of deformable parts
- A reduced proximal-point homotopy method for large-scale non-convex BQP
- Approximation vector machines for large-scale online learning
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- Weighted SGD for \(\ell_p\) regression with randomized preconditioning
- Large-scale linear rankSVM
- A nearest-neighbor search model for distance metric learning
- Stochastic subgradient descent method for large-scale robust chance-constrained support vector machines
- Kernel-based online regression with canal loss
- A primal sub-gradient method for structured classification with the averaged sum loss
- Stochastic proximal linear method for structured non-convex problems
- Robust cost-sensitive kernel method with Blinex loss and its applications in credit risk evaluation
- The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning
- Incremental accelerated gradient methods for SVM classification: study of the constrained approach
- Block stochastic gradient iteration for convex and nonconvex optimization
- Convergence rates for deterministic and stochastic subgradient methods without Lipschitz continuity
- A novel twin parametric support vector machine for large scale problem
- Julia language in machine learning: algorithms, applications, and open issues
- Subgradient-based neural network for nonconvex optimization problems in support vector machines with indefinite kernels
- Nested cross-validation with ensemble feature selection and classification model for high-dimensional biological data
- Convergence of unregularized online learning algorithms
- Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning
- A data efficient and feasible level set method for stochastic convex optimization with expectation constraints
- Block mirror stochastic gradient method for stochastic optimization
- An efficient augmented Lagrangian method for support vector machine
- Online active classification via margin-based and feature-based label queries
- Adaptive proximal SGD based on new estimating sequences for sparser ERM
- SVM via saddle point optimization: new bounds and distributed algorithms
- A hybrid acceleration strategy for nonparallel support vector machine
- Random-reshuffled SARAH does not need full gradient computations
- Making the last iterate of SGD information theoretically optimal
- New machine-learning algorithms for prediction of Parkinson's disease
- Relatively-paired space analysis: learning a latent common space from relatively-paired observations
- Improving kernel online learning with a snapshot memory
- New smoothing SVM algorithm with tight error bound and efficient reduced techniques
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Best subset selection for high-dimensional non-smooth models using iterative hard thresholding
- Dynamical memory control based on projection technique for online regression
- Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning
- Constraint learning: an appetizer
- Batched Stochastic Gradient Descent with Weighted Sampling
- Online training on a budget of support vector machines using twin prototypes
- Robust echo state network with sparse online learning
- An efficient method for clustered multi-metric learning
- How effectively train large-scale machine learning models?
- Scaling up sparse support vector machines by simultaneous feature and sample reduction
- Insensitive stochastic gradient twin support vector machines for large scale problems
- Apportioned margin approach for cost sensitive large margin classifiers
- Nonlinear optimization and support vector machines
- Algorithms for stochastic optimization with function or expectation constraints
- An algebraic characterization of the optimum of regularized kernel methods
- A linear support vector machine solver for a large number of training examples
- Supervised classification and mathematical optimization
- Spectral projected subgradient method for nonsmooth convex optimization problems
- Large-scale machine learning with stochastic gradient descent
- Large-scale linear nonparallel support vector machine solver
- Two new decomposition algorithms for training bound-constrained support vector machines
- Stochastic forward-backward splitting for monotone inclusions
- Image classification with the Fisher vector: theory and practice
- Classification of high-dimensional evolving data streams via a resource-efficient online ensemble
- Fast and strong convergence of online learning algorithms
- Incremental learning for \(\nu\)-support vector regression
- Linear classifiers are nearly optimal when hidden variables have diverse effects
- Proximal Gradient Methods for Machine Learning and Imaging
- On data preconditioning for regularized loss minimization
- One-pass AUC optimization
- Soft margin support vector classification as buffered probability minimization
- Coordinate descent with arbitrary sampling. I: Algorithms and complexity.
- Optimization methods for large-scale machine learning
- Sparse classification: a scalable discrete optimization perspective
- A comparative study on large scale kernelized support vector machines
- An incremental subgradient method on Riemannian manifolds
- Binary vectors for fast distance and similarity estimation
- Stochastic subgradient estimation training for support vector machines
- Breaking the curse of kernelization: budgeted stochastic gradient descent for large-scale SVM training
- Fast structured prediction using large margin sigmoid belief networks
- A modified finite Newton method for fast solution of large scale linear SVMs
- Coordinate descent method for large-scale L2-loss linear support vector machines
- Incremental proximal methods for large scale convex optimization
- Hierarchical linear support vector machine
- Hyper-parameter optimization for support vector machines using stochastic gradient descent and dual coordinate descent
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- The incremental subgradient methods on distributed estimations in-network
- Analysis of loss functions in support vector machines
- Training parsers by inverse reinforcement learning
- Training Lp norm multiple kernel learning in the primal
- Cutting-plane training of structural SVMs
- A deterministic rescaled perceptron algorithm