Adaptivity without Compromise: A Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization

From MaRDI portal
Publication: 71636

DOI: 10.48550/ARXIV.2101.11075 · arXiv: 2101.11075 · MaRDI QID: Q71636 · FDO: Q71636


Authors: Aaron Defazio, Samy Jelassi


Publication date: 26 January 2021

Abstract: We introduce MADGRAD, a novel optimization method in the family of AdaGrad adaptive gradient methods. MADGRAD shows excellent performance on deep learning optimization problems from multiple fields, including classification and image-to-image tasks in vision, and recurrent and bidirectionally-masked models in natural language processing. For each of these tasks, MADGRAD matches or outperforms both SGD and ADAM in test set performance, even on problems for which adaptive methods normally perform poorly.
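The abstract places MADGRAD in the AdaGrad family, combining dual averaging with momentum and an adaptive per-coordinate scaling. The sketch below is my reading of the method from the paper (arXiv:2101.11075), not the authors' released code: a dual-averaged update whose denominator uses the cube root of the accumulated squared gradients (rather than AdaGrad's square root), with momentum applied via iterate averaging. Names such as `madgrad` and the specific hyperparameter defaults here are illustrative assumptions.

```python
import numpy as np

def madgrad(grad_fn, x0, lr=0.1, momentum=0.9, eps=1e-6, steps=300):
    # Sketch of a MADGRAD-style update (assumption: reconstructed from the
    # paper's description, arXiv:2101.11075; not the reference implementation).
    x = x0.astype(float).copy()
    s = np.zeros_like(x)   # dual average: weighted sum of gradients
    v = np.zeros_like(x)   # weighted sum of squared gradients
    for k in range(steps):
        g = grad_fn(x)
        lam = lr * np.sqrt(k + 1)              # increasing dual-averaging weights
        s += lam * g
        v += lam * g * g
        z = x0 - s / (np.cbrt(v) + eps)        # cube-root scaling, vs. sqrt in AdaGrad
        x = momentum * x + (1 - momentum) * z  # momentum as iterate averaging
    return x
```

As a usage example, minimizing the convex quadratic f(x) = ||x - t||² via `madgrad(lambda x: 2 * (x - t), np.zeros(2))` drives the iterate toward t.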








Cited In (2)





