Stochastic dual coordinate ascent methods for regularized loss minimization

Abstract: Stochastic Gradient Descent (SGD) has become popular for solving large-scale supervised machine learning optimization problems such as SVM, due to its strong theoretical guarantees. While the closely related Dual Coordinate Ascent (DCA) method has been implemented in various software packages, it has so far lacked a good convergence analysis. This paper presents a new analysis of Stochastic Dual Coordinate Ascent (SDCA), showing that this class of methods enjoys strong theoretical guarantees that are comparable to or better than those of SGD. This analysis justifies the effectiveness of SDCA for practical applications.
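The core of SDCA is a closed-form maximization of the dual objective over one randomly chosen dual coordinate per iteration, while keeping the primal iterate w = (1/(lam*n)) * sum_i alpha_i x_i in sync. Below is a minimal sketch for the L2-regularized hinge loss (linear SVM), assuming NumPy; the function name sdca_hinge, the epoch structure, and the seed handling are illustrative choices rather than details from the paper, but the coordinate update follows the paper's closed-form hinge-loss step.

    import numpy as np

    def sdca_hinge(X, y, lam, epochs=10, seed=0):
        """Minimal SDCA sketch for L2-regularized hinge loss (linear SVM).

        X: (n, d) feature matrix; y: labels in {-1, +1}; lam: regularization
        parameter. Maintains dual variables alpha with alpha_i * y_i in [0, 1]
        and keeps the primal iterate w = X.T @ alpha / (lam * n) in sync.
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        alpha = np.zeros(n)                  # dual variables
        w = np.zeros(d)                      # primal iterate linked to alpha
        sq_norms = (X ** 2).sum(axis=1)      # precomputed ||x_i||^2
        for _ in range(epochs):
            # Random order each epoch (the paper samples i uniformly at random).
            for i in rng.permutation(n):
                if sq_norms[i] == 0.0:
                    continue
                # Closed-form maximization of the dual over alpha_i alone:
                margin = 1.0 - y[i] * (X[i] @ w)
                delta = y[i] * max(0.0, min(1.0,
                            lam * n * margin / sq_norms[i]
                            + alpha[i] * y[i])) - alpha[i]
                alpha[i] += delta
                w += (delta / (lam * n)) * X[i]  # preserve the primal-dual link
        return w

For example, w = sdca_hinge(X, y, lam=1e-4) returns a weight vector whose sign of X @ w predicts the labels; the (lam * n) scaling reflects the primal-dual relation w = (1/(lam*n)) * sum_i alpha_i x_i used throughout the paper.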





MaRDI item: Q5405257