Accelerated gradient methods with strong convergence to the minimum norm minimizer: a dynamic approach combining time scaling, averaging, and Tikhonov regularization

MaRDI QID: Q6417682
arXiv: 2211.10140


Authors: Hédy Attouch, Z. Chbani, H. Riahi


Publication date: 18 November 2022

Abstract: In a Hilbert framework, for convex differentiable optimization, we consider accelerated gradient methods obtained by combining temporal scaling and averaging techniques with Tikhonov regularization. We start from the continuous steepest descent dynamic with an additional Tikhonov regularization term whose coefficient vanishes asymptotically, and we provide an extensive Lyapunov analysis of this first-order evolution equation. We then apply to this dynamic the time scaling and averaging method recently introduced by Attouch, Bot and Nguyen, which yields an inertial dynamic involving the viscous damping associated with Nesterov's method, implicit Hessian damping, and Tikhonov regularization. Under an appropriate setting of the parameters, using only Jensen's inequality and without the need for a further Lyapunov analysis, we show that the trajectories simultaneously enjoy several remarkable properties: fast convergence of the function values, fast convergence of the gradients to zero, and strong convergence to the minimum norm minimizer. These results complete and improve the previous results obtained by the authors.
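
A standard way to write the first-order dynamic the abstract starts from is x'(t) + ∇f(x(t)) + ε(t) x(t) = 0, where the Tikhonov coefficient ε(t) vanishes as t → ∞. As a minimal illustration of how the vanishing regularization selects the minimum norm minimizer (this is a sketch, not the parameter setting or the inertial dynamic analyzed in the paper; the schedule ε(t) = 1/t, the step size, and the test function are illustrative assumptions), the following Python snippet applies a forward Euler discretization of this flow to a convex quadratic whose minimizer set is a whole line:

import numpy as np

# Forward Euler discretization of the Tikhonov-regularized gradient flow
#   x'(t) + grad f(x(t)) + eps(t) * x(t) = 0,   eps(t) -> 0,
# for f(x) = 0.5 * (x1 + x2 - 1)^2.  The minimizers of f form the line
# x1 + x2 = 1; the minimum norm minimizer is (0.5, 0.5).

def grad_f(x):
    s = x[0] + x[1] - 1.0        # residual of x1 + x2 = 1
    return np.array([s, s])      # gradient of 0.5 * s^2

h = 0.01                         # step size (illustrative choice)
x = np.array([3.0, -1.0])        # starting point off the minimizer line
t = 1.0
for _ in range(200_000):
    eps = 1.0 / t                # slowly vanishing Tikhonov coefficient
    x = x - h * (grad_f(x) + eps * x)
    t += h

print(x)  # close to [0.5, 0.5]; the unregularized flow would instead
          # stop at (2.5, -1.5), the projection of the start onto the line

The design point reflected in the eps = 1/t choice is that the coefficient must vanish slowly enough for the trajectory to track the path of Tikhonov-regularized minimizers; the paper's contribution is to transfer this strong-convergence property to an accelerated inertial dynamic via time scaling and averaging, while also retaining fast convergence of values and gradients.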