New analysis of linear convergence of gradient-type methods via unifying error bound conditions (Q2297652)
Language | Label | Description | Also known as
---|---|---|---
English | New analysis of linear convergence of gradient-type methods via unifying error bound conditions | scientific article |
Statements
New analysis of linear convergence of gradient-type methods via unifying error bound conditions (English)
Publication date: 20 February 2020
Unconstrained optimization problems in $\mathbb{R}^n$ are considered, together with the gradient descent method with constant step size. Residual measure operators for a given function are defined, and related error bound conditions are introduced; the interplay among these conditions, including their equivalence, is discussed. Using the negative of the residual measure operator as a descent direction, an abstract gradient-type method is defined and its linear convergence is proved. Necessary and sufficient conditions are then obtained for the linear convergence of the gradient method, the proximal point algorithm, and the forward-backward splitting algorithm, including for possibly non-convex problems. Linear convergence is also established for proximal alternating linearized minimization and for Nesterov's accelerated forward-backward algorithm.
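The simplest instance of this analysis is gradient descent with constant step size under a Polyak-Łojasiewicz-type error bound. The following minimal Python sketch, which is not taken from the paper (the quadratic test function, dimension, and iteration count are illustrative assumptions), checks numerically that the objective gap contracts by at least the factor $1 - \mu/L$ predicted by such error bound arguments.

```python
import numpy as np

# Minimal sketch (not from the paper): gradient descent with constant step
# size 1/L on a strongly convex quadratic.  For f(x) = 0.5 * x^T A x with
# A positive definite, the gradient is the residual measure operator and
# the Polyak-Lojasiewicz inequality
#     0.5 * ||grad f(x)||^2 >= mu * (f(x) - f*),   mu = lambda_min(A),
# is the error bound; it yields the linear rate
#     f(x_{k+1}) - f* <= (1 - mu/L) * (f(x_k) - f*).
# The test matrix, dimension, and iteration count are illustrative choices.

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M.T @ M + np.eye(n)              # positive definite Hessian
eigs = np.linalg.eigvalsh(A)
mu, L = eigs[0], eigs[-1]            # PL constant and gradient Lipschitz constant


def f(x):
    return 0.5 * x @ A @ x           # minimum value f* = 0 at x* = 0


def grad(x):                         # plays the role of the residual measure operator
    return A @ x


x = rng.standard_normal(n)
step = 1.0 / L                       # constant step size
vals = [f(x)]
for _ in range(100):
    x = x - step * grad(x)           # abstract gradient-type iteration
    vals.append(f(x))

ratios = [vals[k + 1] / vals[k] for k in range(len(vals) - 1)]
print(f"predicted contraction factor 1 - mu/L : {1 - mu / L:.4f}")
print(f"largest observed ratio f_(k+1) / f_k  : {max(ratios):.4f}")
```

Here the gradient itself serves as the residual measure operator; for the proximal point and forward-backward splitting algorithms treated in the paper, the corresponding proximal residual would take its place.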
residual measure operator
error bound
gradient descent
linear convergence
proximal point algorithm
forward-backward splitting algorithm
proximal alternating linearized minimization
Nesterov's acceleration
dual objective function