An optimal randomized incremental gradient method

scientific article
Language: English
Label: An optimal randomized incremental gradient method

    Statements

    An optimal randomized incremental gradient method (English)
    Publication date: 28 September 2018
    Let \(X\) be a closed convex subset of \(\mathbb{R}^{n}\), let \(\| \cdot \| \) be an arbitrary norm, let \(f_{i}:\mathbb{R}^{n}\rightarrow \mathbb{R}\) (\(i=1,\dots,m\)) be \(C^{1,1}\) convex functions, let \(h:X\rightarrow \mathbb{R}\) be convex and ``relatively simple'', let \(\mu \geq 0\), and let \(\omega :X\rightarrow \mathbb{R}\) be strongly convex with modulus \(1\), in the sense that \(\left\langle \nabla\omega (x_{1})-\nabla \omega (x_{2}),x_{1}-x_{2}\right\rangle \geq \frac{1}{2}\| x_{1}-x_{2}\| ^{2}\) for all \(x_{1},x_{2}\in X\). To solve the problem of minimizing \(\frac{1}{m}\sum_{i=1}^{m}f_{i}(x)+h(x)+\mu \omega (x)\) over \(X\), the authors propose two different reformulations as saddle point problems; the primal-dual gradient methods devised to solve them use, at each iteration, the gradient of only one randomly selected \(f_{i}\). The convergence properties of these methods are established.
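    To illustrate the generic incremental idea the review describes (one component gradient \(\nabla f_{i}\) evaluated per iteration), here is a minimal Python sketch. It is a plain stochastic proximal-gradient loop, not the authors' primal-dual saddle-point scheme; the quadratic components \(f_{i}\), the \(\ell_{1}\) term \(h\), \(\omega(x)=\frac{1}{2}\|x\|^{2}\), and the step-size rule are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 50, 10
        A = rng.standard_normal((m, n))
        b = rng.standard_normal(m)
        mu = 0.1   # weight on the strongly convex term omega(x) = 1/2 ||x||^2
        lam = 0.01 # weight on the "relatively simple" term h(x) = lam * ||x||_1

        def grad_fi(x, i):
            # gradient of the illustrative component f_i(x) = 1/2 (a_i^T x - b_i)^2
            return A[i] * (A[i] @ x - b[i])

        def prox_h(x, step):
            # h is "relatively simple": its prox is soft-thresholding
            return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

        x = np.zeros(n)
        for k in range(1, 20001):
            i = rng.integers(m)            # sample one component uniformly at random
            step = 1.0 / (mu * k + 100.0)  # diminishing step size (heuristic choice)
            g = grad_fi(x, i) + mu * x     # one component gradient plus mu * grad(omega)
            x = prox_h(x - step * g, step) # proximal step handles h

        obj = 0.5 * np.mean((A @ x - b) ** 2) + 0.5 * mu * (x @ x) + lam * np.abs(x).sum()
        print("objective:", obj)

    Per iteration this touches a single row of \(A\), which is the whole point of incremental methods when \(m\) is large; the paper's contribution is a randomized primal-dual variant of this idea with an optimal complexity bound, which the sketch above does not attempt to reproduce.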
    Keywords: complexity; convex optimization; incremental gradient method; primal-dual gradient method

    Identifiers

    DOI: 10.1007/s10107-017-1173-0