Fast convex optimization via time scale and averaging of the steepest descent

Publication: 6408096

arXiv: 2208.08260
MaRDI QID: Q6408096
FDO: Q6408096


Authors: Hédy Attouch, Radu I. Boţ, Dang-Khoa Nguyen


Publication date: 17 August 2022

Abstract: In a Hilbert setting, we develop a gradient-based dynamic approach for the fast solution of convex optimization problems. By applying time scaling, averaging, and perturbation techniques to the continuous steepest descent (SD), we obtain high-resolution ODEs of the Nesterov and Ravine methods. These dynamics involve asymptotically vanishing viscous damping and Hessian-driven damping (in either explicit or implicit form). The mathematical analysis does not require developing a Lyapunov analysis for inertial systems: we simply exploit classical convergence results for (SD) and its externally perturbed version, and then use tools of differential and integral calculus, including Jensen's inequality. The method is flexible, and by way of illustration we show how it applies when starting from other important dynamics in optimization: we consider the case where the initial dynamic is the regularized Newton method, then the case where it is the differential inclusion associated with a convex lower semicontinuous potential, and finally we show that the technique extends naturally to monotone cocoercive operators. Our approach leads to parallel algorithmic results, which we study for fast gradient and proximal algorithms. The averaging technique also reveals new links between the Nesterov and Ravine methods.
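
The abstract's first ingredient, time scaling of the steepest descent flow, can be made concrete with a short numerical sketch. The Python snippet below is an illustration under stated assumptions, not code or formulas from the paper: if z solves the SD ODE z'(s) = -grad f(z(s)) and tau is an increasing time scale, then y(t) := z(tau(t)) solves y'(t) = -tau'(t) grad f(y(t)), so y inherits the SD convergence rate evaluated at tau(t). The quadratic objective, the choice tau(t) = t^2, the tolerances, and the horizon are assumptions chosen for the example; the averaging and perturbation steps of the paper are not reproduced here.

# Minimal sketch (assumptions only, not the paper's method): time scaling of
# the steepest descent flow z'(s) = -grad f(z(s)) via y(t) := z(tau(t)),
# which solves the rescaled ODE y'(t) = -tau'(t) * grad f(y(t)).
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([0.05, 1.0])                  # convex quadratic f(x) = 0.5 x^T A x (assumed example)
grad = lambda x: A @ x
f = lambda x: 0.5 * x @ (A @ x)

x0 = np.array([3.0, 1.0])
tau = lambda t: t**2                      # assumed time scale
dtau = lambda t: 2.0 * t                  # its derivative

T = 10.0

# Plain steepest descent z'(s) = -grad f(z), integrated up to s = tau(T)
sd = solve_ivp(lambda s, z: -grad(z), (0.0, tau(T)), x0,
               dense_output=True, rtol=1e-9, atol=1e-12)

# Time-scaled flow y'(t) = -tau'(t) grad f(y), integrated up to t = T
ts = solve_ivp(lambda t, y: -dtau(t) * grad(y), (0.0, T), x0,
               dense_output=True, rtol=1e-9, atol=1e-12)

# Check the reparametrization identity y(T) = z(tau(T)), then compare the
# objective values reached by the two flows at the same clock time T.
print("||y(T) - z(tau(T))|| =", np.linalg.norm(ts.sol(T) - sd.sol(tau(T))))
print("f(z(T)) =", f(sd.sol(T)))          # plain steepest descent after time T
print("f(y(T)) =", f(ts.sol(T)))          # time-scaled flow after time T, equal to f(z(T^2))

Since f(z(s)) - min f = O(1/s) along the steepest descent flow of a convex function, the rescaled trajectory satisfies f(y(t)) - min f = O(1/tau(t)); the averaging step described in the abstract, which is not attempted in this sketch, is what replaces the unbounded coefficient tau'(t) by inertial damping terms.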
