On large batch training and sharp minima: a Fokker-Planck perspective

From MaRDI portal
Publication:828491

DOI: 10.1007/S42519-020-00120-9
zbMATH Open: 1451.90104
arXiv: 2112.00987
OpenAlex: W3043999652
MaRDI QID: Q828491
FDO: Q828491


Authors: Xiaowu Dai, Yuhua Zhu


Publication date: 8 January 2021

Published in: Journal of Statistical Theory and Practice

Abstract: We study the statistical properties of the dynamic trajectory of stochastic gradient descent (SGD). We approximate mini-batch SGD and momentum SGD as stochastic differential equations (SDEs). We exploit the continuous formulation of SDEs and the theory of Fokker-Planck equations to develop new results on the escaping phenomenon and the relationship between large batch sizes and sharp minima. In particular, we find that the stochastic process solution tends to converge to flatter minima regardless of the batch size in the asymptotic regime. However, the convergence rate is rigorously proven to depend on the batch size. These results are validated empirically with various datasets and models.
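The escaping behavior described in the abstract can be illustrated with a toy simulation. The sketch below is not the paper's code; it discretizes a Langevin-type SDE (via Euler-Maruyama) on a hypothetical one-dimensional double-well loss with one sharp and one flat minimum of nearly equal depth, and checks where trajectories settle. The temperature parameter `T`, which in the SGD-as-SDE picture scales like learning rate over batch size, is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "loss" with two wells of depth ~ -1: a sharp minimum near x = -1
# (curvature ~ 2a) and a flat minimum near x = +1 (curvature ~ 2b).
a, b = 25.0, 1.0

def grad_f(x):
    # Gradient of f(x) = -exp(-a (x+1)^2) - exp(-b (x-1)^2)
    return (2 * a * (x + 1) * np.exp(-a * (x + 1) ** 2)
            + 2 * b * (x - 1) * np.exp(-b * (x - 1) ** 2))

# Euler-Maruyama discretization of dX_t = -f'(X_t) dt + sqrt(2 T) dW_t,
# a stand-in for mini-batch SGD noise; T plays the role of (learning
# rate) / (batch size) in the SDE approximation.
eta, T, steps, n_traj = 0.01, 0.3, 20_000, 500
x = np.zeros(n_traj)  # start every run at the barrier between the wells
for _ in range(steps):
    x += -eta * grad_f(x) + np.sqrt(2 * eta * T) * rng.standard_normal(n_traj)

frac_flat = np.mean(x > 0)  # fraction of runs ending in the flat well
print(f"fraction ending near the flat minimum: {frac_flat:.2f}")
```

Because the stationary density of this SDE is proportional to exp(-f(x)/T), the flat well (larger width, roughly by a factor sqrt(a/b)) captures most of the probability mass, so a clear majority of trajectories end near x = +1, consistent with the abstract's claim that the process favors flatter minima.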


Full work available at URL: https://arxiv.org/abs/2112.00987



Cited In (4)

