An optimal randomized incremental gradient method (Q1785198)

From MaRDI portal

scientific article

      Statements

      An optimal randomized incremental gradient method (English)
      28 September 2018
      Let \(X\) be a closed convex subset of \(\mathbb{R}^{n}\), \(\| \cdot \|\) an arbitrary norm, \(f_{i}:\mathbb{R}^{n}\rightarrow \mathbb{R}\) (\(i=1,\dots,m\)) \(C^{1,1}\) convex functions, \(h:X\rightarrow \mathbb{R}\) a convex and ``relatively simple'' function, \(\mu \geq 0\), and \(\omega :X\rightarrow \mathbb{R}\) strongly convex with modulus \(1\), in the sense that \(\left\langle \nabla\omega (x_{1})-\nabla \omega (x_{2}),x_{1}-x_{2}\right\rangle \geq \frac{1}{2}\| x_{1}-x_{2}\| ^{2}\) for all \(x_{1},x_{2}\in X\). To solve the problem of minimizing \(\frac{1}{m}\sum_{i=1}^{m}f_{i}(x)+h(x)+\mu \omega (x)\) over \(X\), the authors propose two different reformulations as saddle point problems; the primal-dual gradient methods devised to solve them use, at each iteration, the gradient of a single randomly selected \(f_{i}\). The convergence properties of these methods are established.
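      The key per-iteration cost the review describes is a single component gradient \(\nabla f_{i}\) for one randomly selected index \(i\). The following is a minimal sketch of that generic randomized incremental gradient idea on a least-squares instance (taking \(h=0\), \(\mu=0\), \(X=\mathbb{R}^{n}\)); the quadratic components, random data, constant step size, and iteration count are hypothetical illustrations, not the authors' primal-dual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 10
A = rng.standard_normal((m, n))  # row a_i defines the component f_i(x) = 0.5*(a_i @ x - b_i)**2
b = rng.standard_normal(m)

x = np.zeros(n)
step = 0.01                      # hypothetical constant step size
for _ in range(20000):
    i = rng.integers(m)          # sample one component index uniformly at random
    grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of the selected f_i only
    x = x - step * grad_i        # incremental gradient step

print("objective:", np.linalg.norm(A @ x - b) ** 2 / (2 * m))
```

      Each iteration touches one row of the data, which is the point of incremental methods when \(m\) is large; the authors' contribution is a randomized primal-dual variant of this idea with optimal complexity guarantees.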
      complexity
      convex optimization
      incremental gradient method
      primal-dual gradient method
