Nonlinear conjugate gradient for smooth convex functions
Publication: 6507664
arXiv: 2111.11613 · MaRDI QID: Q6507664 · FDO: Q6507664
Authors: Sahar Karimi, Stephen A. Vavasis
Abstract: The method of nonlinear conjugate gradients (NCG) is widely used in practice for unconstrained optimization, but it satisfies only weak complexity bounds when applied to smooth convex functions. In contrast, Nesterov's accelerated gradient (AG) method is optimal up to constant factors for this class. However, when specialized to quadratic functions, conjugate gradient is optimal in a strong sense among function-gradient methods. Therefore, there is seemingly a gap in the menu of available algorithms: NCG, the optimal algorithm for quadratic functions that also exhibits good practical performance for general functions, has poor complexity bounds compared to AG. We propose an NCG method called C+AG ("conjugate plus accelerated gradient") to close this gap: it is optimal for quadratic functions and still satisfies the best possible complexity bound for more general smooth convex functions. It takes conjugate gradient steps until insufficient progress is made, at which time it switches to accelerated gradient steps, and later retries conjugate gradient. The proposed method has the following theoretical properties: (i) it is identical to linear conjugate gradient (and hence terminates finitely) if the objective function is quadratic; (ii) its running-time bound is $O(\epsilon^{-1/2}\log(1/\epsilon))$ gradient evaluations for an $L$-smooth convex function, where $\epsilon$ is the desired residual reduction; (iii) its running-time bound is $O(\sqrt{L/\ell}\log(1/\epsilon))$ if the function is both $L$-smooth and $\ell$-strongly convex. In computational tests, the function-gradient evaluation count for the C+AG method typically matches whichever of AG or classical NCG performs better. In some test cases it outperforms both.
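The hybrid scheme described in the abstract can be sketched in a few dozen lines. The following is an illustrative Python sketch, not the authors' implementation: it takes Fletcher–Reeves NCG steps while each step achieves at least a fraction of the decrease $\|g\|^2/(2L)$ guaranteed by a plain gradient step, and otherwise switches to Nesterov accelerated-gradient steps before retrying conjugate gradient. The switching test, the AG step budget of 10, and the line-search constants are simplifying assumptions; the paper's actual progress criterion and restart rules are more refined.

```python
import numpy as np

def c_plus_ag_sketch(f, grad, x0, L, max_iter=1000, tol=1e-8):
    """Illustrative hybrid NCG/AG loop in the spirit of C+AG (not the
    authors' algorithm): CG steps while progress is sufficient, AG otherwise."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    d = -g                      # initial CG search direction
    y, t = x.copy(), 1.0        # AG auxiliary sequence and momentum parameter
    ag_budget = 0               # >0 means we are currently taking AG steps
    for _ in range(max_iter):
        gnorm2 = g @ g
        if np.sqrt(gnorm2) < tol:
            break
        if ag_budget == 0:
            if g @ d >= 0:      # not a descent direction: restart CG
                d = -g
            # Armijo backtracking line search along d
            fx, alpha = f(x), 1.0
            while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
                alpha *= 0.5
                if alpha < 1e-14:
                    break
            x_new = x + alpha * d
            # Sufficient-progress test (a simplification of the paper's):
            # require half the decrease a 1/L gradient step would guarantee.
            if fx - f(x_new) >= 0.5 * gnorm2 / (2.0 * L):
                g_new = grad(x_new)
                beta = (g_new @ g_new) / gnorm2      # Fletcher-Reeves
                d = -g_new + beta * d
                x, g = x_new, g_new
                y, t = x.copy(), 1.0                 # keep AG state in sync
                continue
            ag_budget = 10      # insufficient progress: switch to AG steps
        # --- Nesterov accelerated-gradient step ---
        x_new = y - grad(y) / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
        g = grad(x)
        ag_budget -= 1
        if ag_budget == 0:
            d = -g              # retry conjugate gradient from the AG iterate
    return x

# Usage on a small strongly convex quadratic f(x) = 0.5 x'Ax - b'x,
# where the sketch should recover the exact minimizer A^{-1} b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)
x = c_plus_ag_sketch(f, grad, np.zeros(2), L=np.linalg.eigvalsh(A).max())
```

On a quadratic like the one above, the CG branch dominates and the method behaves like (inexact) linear conjugate gradient; the AG branch only fires when the progress test fails, which is where the complexity guarantee for general smooth convex functions comes from.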