On the Variance of the Adaptive Learning Rate and Beyond

From MaRDI portal
Publication:71647

DOI: 10.48550/ARXIV.1908.03265
arXiv: 1908.03265
MaRDI QID: Q71647
FDO: Q71647


Authors: Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han


Publication date: 8 August 2019

Abstract: The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence, and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Here, we study its mechanism in detail. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate (i.e., it has problematically large variance in the early stage), suggest that warmup works as a variance reduction technique, and provide both empirical and theoretical evidence to verify our hypothesis. We further propose RAdam, a new variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Extensive experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the effectiveness and robustness of our proposed method. All implementations are available at: https://github.com/LiyuanLucasLiu/RAdam.
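
The following is a minimal sketch of the rectified update rule described in the abstract, written in plain NumPy rather than taken from the authors' PyTorch implementation (see the linked repository for the official optimizer). The hyperparameter defaults (lr, beta1, beta2, eps) and the function name radam_step are illustrative assumptions chosen to mirror common Adam settings.

import numpy as np

def radam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam-style update for parameters `theta` given gradient `grad`.

    m, v are exponential moving averages of the gradient and its square;
    t is the 1-based step counter. Returns the updated (theta, m, v).
    Hyperparameter defaults are assumptions, not values prescribed by the paper.
    """
    m = beta1 * m + (1 - beta1) * grad          # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment

    rho_inf = 2.0 / (1.0 - beta2) - 1.0         # length of the approximated SMA at the limit
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable: apply the
        # rectification term r_t together with the usual adaptive step.
        v_hat_sqrt = np.sqrt(v / (1 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                      / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        theta = theta - lr * r_t * m_hat / (v_hat_sqrt + eps)
    else:
        # Early steps: the adaptive learning rate has problematically large
        # variance, so fall back to an un-adapted momentum (SGD-style) update.
        theta = theta - lr * m_hat
    return theta, m, v

In this sketch, with beta2 = 0.999 the condition rho_t > 4 fails only for the first few steps, so the optimizer starts with plain momentum updates and switches to rectified adaptive updates once the variance of the adaptive learning rate becomes tractable, which is the variance-reduction role the abstract attributes to warmup.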








Cited In (1)




