Accelerated Gradient Methods with Memory

From MaRDI portal
Publication:6301989

arXiv: 1805.09077 · MaRDI QID: Q6301989


Authors: Ross Drummond, Stephen R. Duncan


Publication date: 23 May 2018

Abstract: A set of accelerated first-order algorithms with memory are proposed for minimising strongly convex functions. The algorithms are differentiated by their use of the iterate history in the gradient step. The increased convergence rate of the proposed algorithms comes at the cost of robustness, a problem that is resolved by a switching controller based upon adaptive restarting. Several numerical examples highlight the benefits of the proposed approach over the fast gradient method. For example, it is shown that these gradient-based methods can minimise the Rosenbrock banana function to 7.58×10⁻¹² in 43 iterations from an initial condition of (−1, 1).
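The abstract's baseline, the fast gradient method stabilised by adaptive restarting, can be illustrated with a short sketch. This is not the paper's memory-based algorithm; it is a minimal Nesterov-style accelerated scheme with a gradient-based restart test (resetting the momentum when the momentum direction opposes the gradient step), applied to the Rosenbrock function from the abstract's starting point. The step-size backtracking and iteration budget are assumptions for illustration.

```python
import numpy as np

def rosenbrock(z):
    # f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, minimised at (1, 1)
    x, y = z
    return (1 - x)**2 + 100 * (y - x**2)**2

def rosenbrock_grad(z):
    x, y = z
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

def fgm_with_restart(f, grad, z0, iters=5000, s0=1e-2):
    """Nesterov fast gradient method with a gradient-based adaptive
    restart (a standard heuristic, not the paper's memory scheme)."""
    x = y = np.asarray(z0, dtype=float)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        # Backtracking: shrink the step until sufficient decrease holds.
        step = s0
        while f(y - step * g) > f(y) - 0.5 * step * np.dot(g, g):
            step *= 0.5
        x_new = y - step * g
        # Restart test: momentum opposing the gradient step resets t.
        if np.dot(g, x_new - x) > 0:
            t = 1.0
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

z_final = fgm_with_restart(rosenbrock, rosenbrock_grad, [-1.0, 1.0])
```

Without the restart test, the momentum term can cause the iterates to overshoot along the banana-shaped valley; resetting it whenever progress stalls recovers monotone-like descent, which is the robustness issue the abstract's switching controller addresses.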

This page was built for publication: Accelerated Gradient Methods with Memory
