Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization

From MaRDI portal
Publication: 5962715

DOI: 10.1007/S10107-014-0839-0
zbMATH Open: 1342.90103
arXiv: 1309.2375
OpenAlex: W2118545728
MaRDI QID: Q5962715
FDO: Q5962715


Authors: Shai Shalev-Shwartz, Tong Zhang


Publication date: 23 February 2016

Published in: Mathematical Programming. Series A. Series B

Abstract: We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
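The abstract names (non-accelerated) stochastic dual coordinate ascent as the base method. As an illustration only, here is a minimal NumPy sketch of plain SDCA for one of the listed problems, ridge regression; the closed-form coordinate update used below is specific to the squared loss, and the function name and parameters are my own choices, not from the paper (in particular, this sketch omits the proximal and inner-outer acceleration machinery the paper adds).

```python
import numpy as np

def sdca_ridge(X, y, lam, epochs=500, seed=0):
    """Plain stochastic dual coordinate ascent for ridge regression:
    min_w (1/n) * sum_i (x_i.w - y_i)^2 / 2 + (lam/2) * ||w||^2.
    Illustrative sketch; names and defaults are assumptions."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)   # dual variables, one per example
    w = np.zeros(d)       # primal iterate, maintained as X^T alpha / (lam * n)
    sq = np.einsum('ij,ij->i', X, X)  # precomputed ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximizer of the dual increase in coordinate i
            # (valid for squared loss only).
            delta = (y[i] - alpha[i] - X[i] @ w) / (1.0 + sq[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)
    return w
```

For a sanity check, the output can be compared against the exact ridge solution of `(X.T @ X / n + lam * I) w = X.T @ y / n`, to which SDCA converges linearly for smooth losses.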


Full work available at URL: https://arxiv.org/abs/1309.2375








Cites Work


Cited In (74)






This page was built for publication: Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
