Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization (Q5962715)
From MaRDI portal
scientific article; zbMATH DE number 6544654
Language | Label | Description | Also known as
---|---|---|---
English | Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization | scientific article; zbMATH DE number 6544654 |
Statements
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization (English)
23 February 2016
A minimization problem arising in machine learning is considered. The objective is a regularized loss function, obtained by adding a regularizer to a convex loss function. A new, accelerated version of the proximal stochastic dual coordinate ascent method is proposed, and a fast convergence rate is proven. This result improves key machine learning algorithms, including support vector machines (SVM), ridge regression, the Lasso, and multiclass SVM. Experimental results corroborating the theoretical findings are included.
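To illustrate the dual coordinate ascent idea underlying the paper, here is a minimal sketch of plain (non-accelerated) SDCA for ridge regression with squared loss `phi(a) = (a - y)^2` and regularizer `(lam/2)||w||^2`; the function name and all parameters are illustrative, and the closed-form coordinate update below is specific to this loss, not the general accelerated proximal scheme of the paper.

```python
import numpy as np

def sdca_ridge(X, y, lam, epochs=300, seed=0):
    """Vanilla stochastic dual coordinate ascent for ridge regression.

    Primal objective: P(w) = (1/n) sum_i (x_i^T w - y_i)^2 + (lam/2)||w||^2.
    Maintains the primal-dual link w = X^T alpha / (lam * n) and updates
    one randomly chosen dual coordinate per step in closed form.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)          # dual variables, one per example
    w = np.zeros(d)              # primal iterate induced by alpha
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            # exact maximizer of the dual in coordinate i for squared loss
            delta = (y[i] - X[i] @ w - 0.5 * alpha[i]) / (0.5 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)   # keep w = X^T alpha / (lam * n)
    return w
```

Because each coordinate step maximizes the dual exactly, the dual objective is monotonically non-decreasing, and for this smooth loss the iterates converge linearly to the ridge solution; the paper's contribution is an accelerated variant with a better dependence on the condition number.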
stochastic optimization
machine learning