An optimal randomized incremental gradient method (Q1785198)

From MaRDI portal
Property / DOI: 10.1007/s10107-017-1173-0 / rank: Normal rank
Property / describes a project that uses: Saga / rank: Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank: Normal rank
Property / OpenAlex ID: W2964037929 / rank: Normal rank
Property / arXiv ID: 1507.02000 / rank: Normal rank
Property / cites work: On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems / rank: Normal rank
Property / cites work: Interior Gradient and Proximal Methods for Convex and Conic Optimization / rank: Normal rank
Property / cites work: Bregman Monotone Optimization Algorithms / rank: Normal rank
Property / cites work: A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems / rank: Normal rank
Property / cites work: Q5580053 / rank: Normal rank
Property / cites work: On the ergodic convergence rates of a first-order primal-dual algorithm / rank: Normal rank
Property / cites work: A first-order primal-dual algorithm for convex problems with applications to imaging / rank: Normal rank
Property / cites work: The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent / rank: Normal rank
Property / cites work: Optimal Primal-Dual Methods for a Class of Saddle Point Problems / rank: Normal rank
Property / cites work: Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework / rank: Normal rank
Property / cites work: Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms / rank: Normal rank
Property / cites work: Accelerated gradient methods for nonconvex nonlinear and stochastic programming / rank: Normal rank
Property / cites work: Q2753173 / rank: Normal rank
Property / cites work: Proximal Minimization Methods with Generalized Bregman Functions / rank: Normal rank
Property / cites work: An optimal method for stochastic composite optimization / rank: Normal rank
Property / cites work: Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming / rank: Normal rank
Property / cites work: Validation analysis of mirror descent stochastic approximation method / rank: Normal rank
Property / cites work: Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice / rank: Normal rank
Property / cites work: An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization / rank: Normal rank
Property / cites work: Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems / rank: Normal rank
Property / cites work: Robust Stochastic Approximation Approach to Stochastic Programming / rank: Normal rank
Property / cites work: Q3967358 / rank: Normal rank
Property / cites work: Introductory lectures on convex optimization. A basic course. / rank: Normal rank
Property / cites work: Smooth minimization of non-smooth functions / rank: Normal rank
Property / cites work: Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems / rank: Normal rank
Property / cites work: Gradient methods for minimizing composite functions / rank: Normal rank
Property / cites work: Unconstrained Convex Minimization in Relative Scale / rank: Normal rank
Property / cites work: Erratum to: ``Minimizing finite sums with the stochastic average gradient'' / rank: Normal rank
Property / cites work: Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization / rank: Normal rank
Property / cites work: Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization / rank: Normal rank


Language: English
Label: An optimal randomized incremental gradient method
Description: scientific article

    Statements

    Title: An optimal randomized incremental gradient method (English)
    Publication date: 28 September 2018
    Let \(X\) be a closed convex subset of \(\mathbb{R}^{n}\), \(\| \cdot \| \) an arbitrary norm, \(f_{i}:\mathbb{R}^{n}\rightarrow \mathbb{R}\) (\(i=1,\dots,m\)) \(C^{1,1}\) convex functions, \(h:X\rightarrow \mathbb{R}\) convex and ``relatively simple'', \(\mu \geq 0\), and \(\omega :X\rightarrow \mathbb{R}\) strongly convex with modulus \(1\), in the sense that \(\left\langle \nabla\omega (x_{1})-\nabla \omega (x_{2}),x_{1}-x_{2}\right\rangle \geq \frac{1}{2}\| x_{1}-x_{2}\| ^{2}\) for all \(x_{1},x_{2}\in X\). To solve the problem of minimizing \(\frac{1}{m}\sum_{i=1}^{m}f_{i}(x)+h(x)+\mu \omega (x)\) over \(X\), the authors propose two different reformulations as saddle point problems; the primal-dual gradient methods devised to solve them use, at each iteration, the gradient of only one randomly selected \(f_{i}\). The convergence properties of these methods are established.
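    The defining feature of the methods reviewed above is that each iteration queries the gradient of a single randomly selected component \(f_{i}\) rather than the full gradient of \(\frac{1}{m}\sum_{i=1}^{m}f_{i}\). As a rough illustration of that access pattern, here is a minimal SAGA-style sketch (SAGA being the related project linked in the statements above); it is not the authors' randomized primal-dual method. It assumes the Euclidean setting \(\omega(x)=\frac{1}{2}\| x\| ^{2}\) and takes \(h=\lambda \| \cdot \| _{1}\) as the ``relatively simple'' term, so the proximal step is soft-thresholding; the names `saga` and `soft_threshold` and all step-size choices are illustrative assumptions.

```python
# Illustrative sketch only: a SAGA-style incremental gradient loop that, like
# the methods reviewed above, uses the gradient of ONE randomly selected f_i
# per iteration. This is NOT the paper's randomized primal-dual method.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1, the assumed "relatively simple" h.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def saga(grads, x0, step, lam, iters, seed=0):
    # grads: list of m callables, grads[i](x) returns the gradient of f_i at x.
    rng = np.random.default_rng(seed)
    m = len(grads)
    x = x0.copy()
    table = np.array([g(x) for g in grads])  # stored gradient of each component
    avg = table.mean(axis=0)                 # average of the stored gradients
    for _ in range(iters):
        j = rng.integers(m)                  # one random component per step
        gj = grads[j](x)                     # the only new gradient this step
        v = gj - table[j] + avg              # variance-reduced estimate of grad f
        x = soft_threshold(x - step * v, step * lam)
        avg += (gj - table[j]) / m           # keep the running average exact
        table[j] = gj
    return x

# Hypothetical usage: least-squares components f_i(x) = (a_i . x - b_i)^2 / 2.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
    grads = [(lambda x, a=A[i], bi=b[i]: a * (a @ x - bi)) for i in range(50)]
    print(saga(grads, np.zeros(10), step=0.01, lam=0.1, iters=3000))
```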
    Keywords: complexity; convex optimization; incremental gradient method; primal-dual gradient method
