Insensitive stochastic gradient twin support vector machines for large scale problems

From MaRDI portal
Publication:2198233

DOI: 10.1016/J.INS.2018.06.007
zbMATH Open: 1440.68245
DBLP: journals/isci/WangSBLLD18
arXiv: 1704.05596
OpenAlex: W2805876623
Wikidata: Q59305688 (Scholia: Q59305688)
MaRDI QID: Q2198233
FDO: Q2198233


Authors: Zhen Wang, Yuan-Hai Shao, Lan Bai, Chunna Li, Liming Liu, Nai-Yang Deng


Publication date: 9 September 2020

Published in: Information Sciences

Abstract: The stochastic gradient descent algorithm has been successfully applied to support vector machines (yielding PEGASOS) for many classification problems. In this paper, stochastic gradient descent is applied to twin support vector machines for classification. Compared with PEGASOS, the proposed stochastic gradient twin support vector machine (SGTSVM) is insensitive to the stochastic sampling used by stochastic gradient descent. In theory, we prove the convergence of SGTSVM, in contrast to the almost sure convergence of PEGASOS. For uniform sampling, we also give the approximation between SGTSVM and twin support vector machines, whereas PEGASOS only has a chance to obtain an approximation of support vector machines. In addition, the nonlinear SGTSVM is derived directly from its linear case. Experimental results on both artificial datasets and large-scale problems show the stable performance of SGTSVM together with a fast learning speed.
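The abstract's core idea — training each of the twin SVM's two nonparallel hyperplanes by sampling one point from each class per step — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the learning rate `eta`, the penalty `c`, and the iteration budget are all assumptions, and the standard TWSVM primal (proximity of a plane to its own class plus a hinge penalty pushing the other class at least unit distance away) is used as the per-sample objective.

```python
import numpy as np

def sgtsvm_fit(A, B, c=1.0, eta=0.01, iters=2000, seed=0):
    """Illustrative stochastic-gradient twin SVM (hypothetical sketch).

    A, B: arrays of shape (n_i, d), one row per training point of each class.
    Returns ((w1, b1), (w2, b2)): two nonparallel hyperplanes, each fitted
    close to its own class and pushed away from the other class.
    """
    rng = np.random.default_rng(seed)
    d = A.shape[1]

    def train_plane(own, other):
        w, b = np.zeros(d), 0.0
        for _ in range(iters):
            # one random sample from each class per step (stochastic gradient)
            x = own[rng.integers(len(own))]
            z = other[rng.integers(len(other))]
            # gradient of the proximity term 0.5 * (w.x + b)^2
            r = w @ x + b
            gw, gb = r * x, r
            # hinge max(0, 1 + (w.z + b)) pushes the other class to w.z + b <= -1
            # (the original formulation flips the sign for the second plane;
            #  distance-based prediction below is sign-agnostic, so one
            #  convention suffices for this sketch)
            if 1.0 + (w @ z + b) > 0.0:
                gw, gb = gw + c * z, gb + c
            w -= eta * gw
            b -= eta * gb
        return w, b

    return train_plane(A, B), train_plane(B, A)

def sgtsvm_predict(X, planes):
    """Assign each point to the class whose hyperplane is nearer."""
    (w1, b1), (w2, b2) = planes
    d1 = np.abs(X @ w1 + b1) / np.linalg.norm(w1)
    d2 = np.abs(X @ w2 + b2) / np.linalg.norm(w2)
    return np.where(d1 <= d2, 0, 1)
```

Because each update touches only one sample per class, the per-iteration cost is independent of the dataset size, which is what makes the approach attractive for the large-scale problems the title refers to.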


Full work available at URL: https://arxiv.org/abs/1704.05596




Cited In (10)


