Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity
From MaRDI portal
Publication:4594841
DOI: 10.1080/10556788.2017.1296439
zbMATH Open: 1382.90079
arXiv: 1309.0113
OpenAlex: W2962834995
MaRDI QID: Q4594841
Authors: Anthony Man-Cho So, Zirui Zhou
Publication date: 24 November 2017
Published in: Optimization Methods & Software
Abstract: Many recent applications in machine learning and data fitting call for the algorithmic solution of structured smooth convex optimization problems. Although the gradient descent method is a natural choice for this task, it requires exact gradient computations and hence can be inefficient when the problem size is large or the gradient is difficult to evaluate. Therefore, there has been much interest in inexact gradient methods (IGMs), in which an efficiently computable approximate gradient is used to perform the update in each iteration. Currently, non-asymptotic linear convergence results for IGMs are typically established under the assumption that the objective function is strongly convex, which is not satisfied in many applications of interest; while linear convergence results that do not require the strong convexity assumption are usually asymptotic in nature. In this paper, we combine the best of these two types of results and establish---under the standard assumption that the gradient approximation errors decrease linearly to zero---the non-asymptotic linear convergence of IGMs when applied to a class of structured convex optimization problems. Such a class covers settings where the objective function is not necessarily strongly convex and includes the least squares and logistic regression problems. We believe that our techniques will find further applications in the non-asymptotic convergence analysis of other first-order methods.
Full work available at URL: https://arxiv.org/abs/1309.0113
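The abstract describes inexact gradient methods (IGMs), in which each update uses an approximate gradient whose error decreases linearly (geometrically) to zero. The following is a minimal NumPy sketch of that setup on a least squares problem that is convex but not strongly convex; the function name, the constant step size 1/L, and the random noise model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def inexact_gradient_method(A, b, x0, step, rho=0.5, delta0=1.0, iters=200, rng=None):
    """Sketch of an inexact gradient method for f(x) = 0.5 * ||A x - b||^2.
    The exact gradient A^T (A x - b) is perturbed by a noise vector whose norm
    shrinks geometrically, mimicking the assumption that gradient approximation
    errors decrease linearly to zero. Parameter names are illustrative."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = x0.copy()
    f_vals = []
    for k in range(iters):
        grad = A.T @ (A @ x - b)                     # exact gradient
        noise = rng.standard_normal(x.shape)
        noise *= (delta0 * rho**k) / (np.linalg.norm(noise) + 1e-12)  # ||e_k|| = delta0 * rho^k
        x = x - step * (grad + noise)                # update with the approximate gradient
        f_vals.append(0.5 * np.linalg.norm(A @ x - b) ** 2)
    return x, f_vals

# Usage: a rank-deficient least squares instance (convex, not strongly convex).
M = np.random.randn(50, 5)
A = np.hstack([M, M])                                # repeated columns make A^T A singular
b = np.random.randn(50)
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
x_final, f_vals = inexact_gradient_method(A, b, np.zeros(10), step=1.0 / L)
```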
Keywords: logistic regression; least squares regression; inexact gradient method; global error bound; non-asymptotic convergence rate
Cited In (10)
- On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming
- Accelerating incremental gradient optimization with curvature information
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- Convergence Analysis of Inexact Randomized Iterative Methods
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- On the Estimation Performance and Convergence Rate of the Generalized Power Method for Phase Synchronization
- Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Inexact gradient projection method with relative error tolerance
- A linearly convergent stochastic recursive gradient method for convex optimization