Projected semi-stochastic gradient descent method with mini-batch scheme under weak strong convexity assumption
Abstract: We propose a projected semi-stochastic gradient descent method with mini-batches (PS2GD) that improves both the theoretical complexity and the practical performance of the general stochastic gradient descent method (SGD). We prove linear convergence under a weak strong convexity assumption; no strong convexity is required for minimizing a sum of smooth convex functions over a compact polyhedral set, a problem class that remains popular in the machine learning community. PS2GD preserves the low per-iteration cost and high optimization accuracy of SGD through a variance-reduction technique for stochastic gradients, and it admits a simple parallel implementation via mini-batches. Moreover, PS2GD is also applicable to the dual problem of SVM with hinge loss.
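The abstract describes a recognizable algorithmic pattern: an outer loop that takes a full-gradient snapshot, an inner loop that applies variance-reduced mini-batch steps, and a Euclidean projection onto the feasible polyhedral set after each step. Below is a minimal Python sketch of that pattern on a least-squares problem constrained to a box (a simple compact polyhedron). The function names, step size, and epoch lengths are illustrative assumptions, not the paper's exact PS2GD parameters.

```python
import numpy as np

# Hypothetical sketch of a projected, variance-reduced mini-batch method in
# the spirit of PS2GD. Problem: minimize f(x) = (1/2n) * ||Ax - b||^2 over
# the box [0, 1]^d. All parameter choices below are illustrative assumptions.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
x_true = rng.uniform(0.0, 1.0, d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def full_grad(x):
    # Gradient of f(x) = (1/2n) * ||Ax - b||^2.
    return A.T @ (A @ x - b) / n

def batch_grad(x, idx):
    # Mini-batch gradient averaged over the sampled indices idx.
    Ab = A[idx]
    return Ab.T @ (Ab @ x - b[idx]) / len(idx)

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box [lo, hi]^d, a compact polyhedron.
    return np.clip(x, lo, hi)

def ps2gd_sketch(x0, outer=30, inner=2 * n, batch=5, step=0.05):
    x = x0.copy()
    for _ in range(outer):
        snapshot = x.copy()
        g_full = full_grad(snapshot)  # full-gradient snapshot (outer loop)
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            # Variance-reduced direction: mini-batch gradient at x, corrected
            # by the mini-batch gradient at the snapshot plus g_full.
            g = batch_grad(x, idx) - batch_grad(snapshot, idx) + g_full
            x = project_box(x - step * g)  # projected gradient step
    return x

x_hat = ps2gd_sketch(np.zeros(d))
print("objective:", 0.5 * np.mean((A @ x_hat - b) ** 2))
```

The correction term keeps the stochastic direction unbiased while shrinking its variance as the snapshot approaches the solution, which is what makes a constant step size, and hence linear convergence, possible in this family of methods.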
Recommendations
- Semi-stochastic coordinate descent
- A proximal stochastic gradient method with progressive variance reduction
- Asymptotic estimates for \(r\)-Whitney numbers of the second kind
- Stochastic model-based minimization of weakly convex functions
- Improved SVRG for finite sum structure optimization with application to binary classification
Cited in 2 documents
MaRDI item: Q1695084