SAN: Stochastic Average Newton Algorithm for Minimizing Finite Sums

From MaRDI portal
Publication: 6370733

arXiv: 2106.10520
MaRDI QID: Q6370733
FDO: Q6370733


Authors: Jiabin Chen, Rui Yuan, Guillaume Garrigos, Robert M. Gower


Publication date: 19 June 2021

Abstract: We present a principled approach for designing stochastic Newton methods for solving finite sum optimization problems. Our approach has two steps. First, we rewrite the stationarity conditions as a system of nonlinear equations that associates each data point with a new row. Second, we apply a subsampled Newton-Raphson method to solve this system of nonlinear equations. Using our approach, we develop a new Stochastic Average Newton (SAN) method, which is incremental by design, in that it requires only a single data point per iteration. It is also cheap to implement when solving regularized generalized linear models, with a per-iteration cost on the order of the number of parameters. We show through extensive numerical experiments that SAN requires no knowledge of the problem and no parameter tuning, while remaining competitive with classical variance-reduced gradient methods (e.g. SAG and SVRG) and with incremental Newton and quasi-Newton methods (e.g. SNM, IQN).
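The second step of the approach, applying Newton-Raphson to the stationarity equations ∇f(w) = 0, can be illustrated in the full-batch special case for ℓ2-regularized logistic regression. This is only a minimal sketch of the Newton-Raphson building block, not the paper's subsampled SAN update (SAN uses a single data point per iteration); all variable names and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: Newton-Raphson on the stationarity equations
# grad f(w) = 0 for l2-regularized logistic regression,
#   f(w) = (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)) + (lam/2) * ||w||^2.
# Full-batch version for clarity; SAN itself subsamples the system.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))  # labels in {-1, +1}
lam = 0.1

def grad(w):
    # d/dz log(1+exp(-y z)) = -y * sigmoid(-y z)
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    return -(X.T @ (y * s)) / n + lam * w

def hessian(w):
    # second derivative w.r.t. z is sigmoid(z) * (1 - sigmoid(z)), independent of y
    s = 1.0 / (1.0 + np.exp(-(X @ w)))
    D = s * (1.0 - s)
    return (X.T * D) @ X / n + lam * np.eye(d)

w = np.zeros(d)
for _ in range(20):
    # Newton-Raphson step: solve H(w) * delta = grad(w), then w <- w - delta
    w -= np.linalg.solve(hessian(w), grad(w))

print(np.linalg.norm(grad(w)))  # near zero: w solves the stationarity equations
```

Because the regularized objective is strongly convex, the full-batch Newton-Raphson iteration converges rapidly; each step here costs O(d^3) for the solve, whereas the paper's SAN update for regularized GLMs is reported to cost on the order of the number of parameters per iteration.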


Has companion code repository: https://github.com/nathansiae/Stochastic-Average-Newton

