Implementation of Stochastic Quasi-Newton's Method in PyTorch

From MaRDI portal
Publication: 6301248

arXiv: 1805.02338
MaRDI QID: Q6301248
FDO: Q6301248


Authors: Yingkai Li, Huidong Liu


Publication date: 7 May 2018

Abstract: In this paper, we implement the Stochastic Damped LBFGS (SdLBFGS) method for stochastic non-convex optimization. We make two important modifications to the original SdLBFGS algorithm. First, by initializing the Hessian approximation with the identity matrix at each step, the algorithm converges better than the original. Second, by normalizing the search direction, we obtain a stable optimization procedure without a line search. Experiments on minimizing a 2D non-convex function show that our improved algorithm converges better than the original, and experiments on the CIFAR-10 and MNIST datasets show that it runs stably and achieves testing accuracies comparable to or better than the first-order optimizers SGD and Adagrad and the second-order optimizer LBFGS in PyTorch.
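The two modifications described in the abstract can be illustrated with a short sketch. The NumPy code below (function names, memory handling, and the fixed learning rate are assumptions for illustration, not the authors' implementation) shows an L-BFGS two-loop recursion whose initial Hessian is reset to the identity every step, followed by the direction normalization that stands in for a line search:

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion. s_list/y_list hold recent parameter
    and gradient differences (oldest first). The initial Hessian is
    reset to the identity each call (the paper's first modification)."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # Backward pass: newest pair first.
    for (s, y), rho in zip(reversed(list(zip(s_list, y_list))),
                           reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    r = q  # H0 = I: identity initialization each step
    # Forward pass: oldest pair first, consuming alphas oldest-first.
    for (s, y), rho, a in zip(zip(s_list, y_list), rhos,
                              reversed(alphas)):
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r  # quasi-Newton descent direction

def normalized_update(x, direction, lr=0.1, eps=1e-8):
    """The paper's second modification: rescale the direction to unit
    length so a fixed learning rate gives stable steps with no line
    search. lr and eps are illustrative choices."""
    d = direction / (np.linalg.norm(direction) + eps)
    return x + lr * d
```

With an empty memory the recursion reduces to plain gradient descent (`direction = -grad`), and each normalized update moves the iterate by almost exactly `lr`, which is what makes the step size predictable without a line search.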




Has companion code repository: https://github.com/harryliew/SdLBFGS
