Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity
Publication:4594841
Abstract: Many recent applications in machine learning and data fitting call for the algorithmic solution of structured smooth convex optimization problems. Although the gradient descent method is a natural choice for this task, it requires exact gradient computations and hence can be inefficient when the problem size is large or the gradient is difficult to evaluate. Therefore, there has been much interest in inexact gradient methods (IGMs), in which an efficiently computable approximate gradient is used to perform the update in each iteration. Currently, non-asymptotic linear convergence results for IGMs are typically established under the assumption that the objective function is strongly convex, which is not satisfied in many applications of interest; while linear convergence results that do not require the strong convexity assumption are usually asymptotic in nature. In this paper, we combine the best of these two types of results and establish---under the standard assumption that the gradient approximation errors decrease linearly to zero---the non-asymptotic linear convergence of IGMs when applied to a class of structured convex optimization problems. Such a class covers settings where the objective function is not necessarily strongly convex and includes the least squares and logistic regression problems. We believe that our techniques will find further applications in the non-asymptotic convergence analysis of other first-order methods.
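The setting described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's algorithm or analysis: it runs an inexact gradient method on a randomly generated least squares problem, with the gradient approximation error forced to decrease linearly (geometrically) to zero, in line with the paper's standing assumption. The problem data, step size, and error schedule (err0, rho) are assumptions chosen for the example.

```python
import numpy as np

# Illustrative inexact gradient method (IGM) for least squares:
#   min_x 0.5 * ||A x - b||^2
# Each iteration uses the exact gradient perturbed by a noise vector whose
# norm shrinks geometrically, i.e. the gradient approximation error decays
# linearly to zero. All names and constants here are placeholders.

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient (largest singular value squared)
step = 1.0 / L                  # standard step size for a smooth convex objective

x = np.zeros(n)
err0, rho = 1.0, 0.9            # gradient error norm at iteration k is err0 * rho**k

for k in range(500):
    grad = A.T @ (A @ x - b)                           # exact gradient
    noise = rng.standard_normal(n)
    noise *= (err0 * rho ** k) / np.linalg.norm(noise)  # rescale to the prescribed error norm
    x -= step * (grad + noise)                         # inexact gradient update

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

Note that the least squares objective above is convex but not strongly convex when A is rank deficient, which is precisely the regime the paper's non-asymptotic linear convergence result is meant to cover.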
Recommendations
- Asymptotic estimates for \(r\)-Whitney numbers of the second kind
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Convergence rates for the stochastic gradient descent method for non-convex objective functions
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Inexact SARAH algorithm for stochastic optimization
Cited in (15)
- Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
- Convergence analysis of inexact randomized iterative methods
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- A linearly convergent stochastic recursive gradient method for convex optimization
- On the nonergodic convergence rate of an inexact augmented Lagrangian framework for composite convex programming
- Inexact SARAH algorithm for stochastic optimization
- Asymptotic estimates for \(r\)-Whitney numbers of the second kind
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- On the estimation performance and convergence rate of the generalized power method for phase synchronization
- Inexact gradient projection method with relative error tolerance
- From inexact optimization to learning via gradient concentration
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- A frequency-domain analysis of inexact gradient methods
- Incremental without replacement sampling in nonconvex optimization
- Accelerating incremental gradient optimization with curvature information