Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization (Q5962715)
From MaRDI portal
scientific article; zbMATH DE number 6544654
Statements
23 February 2016
A minimization problem arising in machine learning is considered. The objective is a regularized loss function, obtained by adding a regularizer to a convex loss function. A new, accelerated variant of the stochastic dual coordinate ascent method is proposed, and a fast convergence rate is proven. This result yields improved runtime guarantees for key machine learning problems, including support vector machines (SVM), ridge regression, the Lasso, and multiclass SVM. Experimental results corroborating the theoretical findings are included.
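To illustrate the method family the review refers to, here is a minimal sketch of the basic (non-accelerated, non-proximal) stochastic dual coordinate ascent step for ridge regression, one of the applications mentioned. This is an assumption-laden baseline sketch, not the paper's algorithm: the accelerated proximal variant adds an inner-outer iteration with momentum and a proximal step that are not shown here.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=50, seed=0):
    """Plain (non-accelerated) SDCA sketch for ridge regression:
        min_w (1/n) * sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2.
    Maintains dual variables alpha and the primal iterate
        w = X^T alpha / (lam * n).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)               # one dual variable per example
    w = np.zeros(d)                   # primal iterate kept in sync with alpha
    sq_norms = (X ** 2).sum(axis=1)   # precomputed ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):
            # Closed-form maximization of the dual over coordinate i
            # (exact for the smooth squared loss).
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += (delta / (lam * n)) * X[i]
    return w
```

For the smooth squared loss, each coordinate update has this closed form, which is what makes SDCA attractive for ridge regression; the accelerated proximal version in the paper improves the dependence on the condition number.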
stochastic optimization
machine learning