Incremental gradient algorithms with stepsizes bounded away from zero (Q1273418)
Language | Label | Description | Also known as |
---|---|---|---|
English | Incremental gradient algorithms with stepsizes bounded away from zero | scientific article | |
Statements
Incremental gradient algorithms with stepsizes bounded away from zero (English)
15 June 1999
For the problem \[ \text{minimize}\quad \sum^k_{j= 1} f_j(x)\quad\text{subject to}\quad x\in\mathbb{R}^n, \] where \(f_1,\dots, f_k: \mathbb{R}^n\to \mathbb{R}\) are continuously differentiable functions, the author proposes the following incremental gradient algorithm: Choose any \(x^0\in \mathbb{R}^n\). Given \(x^i\), check a stopping criterion; if it is not satisfied, compute \(x^{i+ 1}= T(x^i,\eta_i)\), where \(T:\mathbb{R}^n\times \mathbb{R}^+\to \mathbb{R}^n\) is given by \[ T(x,\eta):= x-\eta \sum^k_{j=1}\nabla f_j(z^j) \] and \[ z^1= x,\quad z^{j+1}= z^j- \eta\nabla f_j(z^j),\quad j=1,\dots, k-1. \] Each iteration thus makes one pass through the components \(f_1,\dots,f_k\), updating an auxiliary point after every gradient evaluation; note that \(T(x,\eta)\) coincides with the point obtained by extending the recursion one further step, \(z^{k+1}= z^k- \eta\nabla f_k(z^k)\). The author presents convergence results for a class of incremental gradient algorithms with stepsizes bounded away from zero. Applications to neural network training are discussed.
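A minimal sketch of one pass \(T(x,\eta)\), assuming each gradient \(\nabla f_j\) is available as a callable; this is not the paper's code, and the least-squares components used for illustration are hypothetical. It exploits the identity noted above: the auxiliary point after all \(k\) updates equals \(T(x,\eta)\).

```python
import numpy as np

def incremental_gradient_pass(x, eta, grad_fs):
    """One incremental pass T(x, eta): starting from z^1 = x, apply
    z^{j+1} = z^j - eta * grad f_j(z^j) for each component in order.
    The final point equals x - eta * sum_j grad f_j(z^j) = T(x, eta)."""
    z = x.copy()
    for grad_fj in grad_fs:
        z = z - eta * grad_fj(z)
    return z

# Hypothetical example: least-squares components
# f_j(x) = 0.5 * (a_j @ x - b_j)**2, so grad f_j(x) = (a_j @ x - b_j) * a_j.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)
grad_fs = [lambda z, a=A[j], bj=b[j]: (a @ z - bj) * a for j in range(20)]

x = np.zeros(3)
eta = 0.01  # constant stepsize, bounded away from zero
for _ in range(500):
    x = incremental_gradient_pass(x, eta, grad_fs)
```

The fixed `eta` here mirrors the "bounded away from zero" setting of the title; the per-component updates inside the loop are what distinguish the scheme from an ordinary full-gradient step with the same stepsize.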
incremental gradient algorithm
convergence
neural network training