Potential-based analyses of first-order methods for constrained and composite optimization
Publication: 6315937
arXiv: 1903.08497
MaRDI QID: Q6315937
Authors: Courtney Paquette, Stephen A. Vavasis
Publication date: 20 March 2019
Abstract: We propose potential-based analyses for first-order algorithms applied to constrained and composite minimization problems. We first propose "idealized" frameworks for algorithms in the strongly and non-strongly convex cases and argue, based on a potential, that methods following the framework achieve the best possible rate. Then we show that the geometric descent (GD) algorithm of Bubeck et al., as extended to the constrained and composite setting by Chen et al., achieves this rate using the potential-based analysis for the strongly convex case. Next, we extend the GD algorithm to the case of non-strongly convex problems. We show, using a related potential-based argument, that our extension achieves the best possible rate in this case as well. The new GD algorithm also achieves the best possible rate in the nonconvex case. We also analyze accelerated gradient methods using the new potentials. We then turn to the special case of a quadratic function with a single ball constraint, the famous trust-region subproblem. For this case, the first-order trust-region Lanczos method of Gould et al. finds the optimal point in an increasing sequence of Krylov spaces. Our results for the general case immediately imply convergence rates for their method in both the strongly convex and non-strongly convex cases. We also establish the same convergence rates for their method using arguments based on Chebyshev polynomial approximation. To the best of our knowledge, no convergence rate has previously been established for the trust-region Lanczos method.
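To make the trust-region subproblem mentioned in the abstract concrete, here is a minimal dense-matrix sketch in Python. It is not the Krylov-space Lanczos method of Gould et al. analyzed in the paper; it is a textbook dual approach for small problems, solving min ½xᵀAx + bᵀx subject to ‖x‖ ≤ r by bisecting on the multiplier μ of the ball constraint. The function name `solve_trs` and the bisection tolerance are illustrative choices, and the degenerate "hard case" (b orthogonal to the bottom eigenspace of A) is not handled.

```python
import numpy as np

def solve_trs(A, b, r, tol=1e-10):
    """Solve min 0.5 x'Ax + b'x subject to ||x|| <= r (small dense case).

    Dual/bisection sketch; assumes the generic (non-"hard") case.
    """
    n = len(b)
    lam_min = np.linalg.eigvalsh(A)[0]  # smallest eigenvalue of A
    # If A is positive definite and the Newton step fits in the ball,
    # the unconstrained minimizer is the solution.
    if lam_min > 0:
        x = np.linalg.solve(A, -b)
        if np.linalg.norm(x) <= r:
            return x
    # Otherwise the solution lies on the boundary: x(mu) = -(A + mu I)^{-1} b
    # for some mu >= max(0, -lam_min) with ||x(mu)|| = r. The norm is
    # decreasing in mu, so we can locate mu by bisection.
    def step_norm(mu):
        return np.linalg.norm(np.linalg.solve(A + mu * np.eye(n), -b))
    lo = max(0.0, -lam_min) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > r:   # grow the bracket until the step fits
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if step_norm(mid) > r:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(A + hi * np.eye(n), -b)
```

For example, with A = diag(2, 3), b = (-4, 0), r = 1, the unconstrained minimizer (2, 0) lies outside the ball, and the boundary solution is x = (1, 0) with multiplier μ = 2.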
This page was built for publication: Potential-based analyses of first-order methods for constrained and composite optimization