Proximal Stochastic Newton-type Gradient Descent Methods for Minimizing Regularized Finite Sums
From MaRDI portal
Publication:6254535
Authors: Ziqiang Shi
Publication date: 10 September 2014
Abstract: In this work, we generalize and unify two recent, completely different works of Jascha \cite{sohl2014fast} and Lee \cite{lee2012proximal} by proposing the \textbf{prox}imal s\textbf{to}chastic \textbf{N}ewton-type gradient (PROXTONE) method for optimizing the sum of two convex functions: one is the average of a huge number of smooth convex functions, and the other is a non-smooth convex function. While a set of recently proposed proximal stochastic gradient methods, including MISO, Prox-SDCA, Prox-SVRG, and SAG, converge at linear rates, PROXTONE incorporates second-order information to obtain stronger convergence results, in that it achieves a linear convergence rate not only in the value of the objective function, but also in the \emph{solution}. The proof is simple and intuitive, and the results and techniques can serve as a starting point for research on proximal stochastic methods that employ second-order information.
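To make the problem class concrete, the sketch below minimizes F(x) = (1/n) Σ_i f_i(x) + h(x) with smooth least-squares terms f_i and a non-smooth l1 penalty h, using a plain proximal stochastic gradient iteration. This is only an illustration of the setting the abstract describes, not the PROXTONE method itself (which additionally uses second-order information); the data, step size, and regularization weight are arbitrary choices for the example.

```python
import numpy as np

# Problem from the abstract's setting: minimize
#   F(x) = (1/n) * sum_i f_i(x) + h(x),
# with f_i(x) = 0.5 * (a_i^T x - b_i)^2 (smooth, convex)
# and h(x) = lam * ||x||_1 (non-smooth, convex).
# NOTE: this is a first-order proximal stochastic gradient sketch,
# not PROXTONE, which incorporates second-order information.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam = 0.1  # l1 regularization weight (arbitrary for illustration)

def objective(x):
    return 0.5 * np.mean((A @ x - b) ** 2) + lam * np.abs(x).sum()

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
step = 0.01
for epoch in range(50):
    for i in rng.permutation(n):
        g = (A[i] @ x - b[i]) * A[i]                   # stochastic gradient of f_i
        x = soft_threshold(x - step * g, step * lam)   # proximal (soft-threshold) step

print(objective(np.zeros(d)), objective(x))
```

Each iteration takes a gradient step on one randomly chosen smooth term and then applies the proximal operator of the scaled l1 penalty; second-order variants such as PROXTONE replace the Euclidean proximal step with one scaled by an approximate Hessian.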