An optimal method for stochastic composite optimization (Q431018)

From MaRDI portal

Property / describes a project that uses: SUTIL / rank
Normal rank
Property / describes a project that uses: NESTA / rank
Normal rank
Property / MaRDI profile type: MaRDI publication profile / rank
Normal rank
Property / full work available at URL: https://doi.org/10.1007/s10107-010-0434-y / rank
Normal rank
Property / OpenAlex ID: W2024484010 / rank
Normal rank
Property / cites work: Interior Gradient and Proximal Methods for Convex and Conic Optimization / rank
Normal rank
Property / cites work: Bregman Monotone Optimization Algorithms / rank
Normal rank
Property / cites work: Non-Euclidean restricted memory level method for large-scale convex optimization / rank
Normal rank
Property / cites work: Q3151174 / rank
Normal rank
Property / cites work: Q5580053 / rank
Normal rank
Property / cites work: Smooth Optimization with Approximate Gradient / rank
Normal rank
Property / cites work: First-Order Methods for Sparse Covariance Selection / rank
Normal rank
Property / cites work: Stochastic quasigradient methods and their application to system optimization / rank
Normal rank
Property / cites work: Recursive aggregation of estimators by the mirror descent algorithm with averaging / rank
Normal rank
Property / cites work: Learning by mirror averaging / rank
Normal rank
Property / cites work: Proximal Minimization Methods with Generalized Bregman Functions / rank
Normal rank
Property / cites work: The Sample Average Approximation Method for Stochastic Discrete Optimization / rank
Normal rank
Property / cites work: Q4421713 / rank
Normal rank
Property / cites work: Iteration-complexity of first-order penalty methods for convex programming / rank
Normal rank
Property / cites work: Iteration-complexity of first-order augmented Lagrangian methods for convex programming / rank
Normal rank
Property / cites work: The empirical behavior of sampling methods for stochastic programming / rank
Normal rank
Property / cites work: Smooth Optimization Approach for Sparse Covariance Selection / rank
Normal rank
Property / cites work: Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression / rank
Normal rank
Property / cites work: Large-scale semidefinite programming via a saddle point mirror-prox algorithm / rank
Normal rank
Property / cites work: Monte Carlo bounding techniques for determining solution quality in stochastic programs / rank
Normal rank
Property / cites work: On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean / rank
Normal rank
Property / cites work: Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems / rank
Normal rank
Property / cites work: Robust Stochastic Approximation Approach to Stochastic Programming / rank
Normal rank
Property / cites work: Introductory lectures on convex optimization. A basic course. / rank
Normal rank
Property / cites work: Smooth minimization of non-smooth functions / rank
Normal rank
Property / cites work: Primal-dual subgradient methods for convex problems / rank
Normal rank
Property / cites work: Smoothing technique and its applications in semidefinite optimization / rank
Normal rank
Property / cites work: Q4335417 / rank
Normal rank
Property / cites work: Acceleration of Stochastic Approximation by Averaging / rank
Normal rank
Property / cites work: A Stochastic Approximation Method / rank
Normal rank
Property / cites work: Convex Analysis / rank
Normal rank
Property / cites work: A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems / rank
Normal rank
Property / cites work: Monte Carlo sampling approach to stochastic programming / rank
Normal rank
Property / cites work: Q5494167 / rank
Normal rank
Property / cites work: Introduction to Stochastic Search and Optimization / rank
Normal rank
Property / cites work: The Existence of Probability Measures with Given Marginals / rank
Normal rank
Property / cites work: Convergence of Proximal-Like Algorithms / rank
Normal rank
Property / cites work: The sample average approximation method applied to stochastic routing problems: a computational study / rank
Normal rank

Language: English
Label: An optimal method for stochastic composite optimization
Description: scientific article

    Statements

    An optimal method for stochastic composite optimization (English)
    26 June 2012
    The stochastic composite optimization problem under consideration is to minimize the sum \(f+h\) of two convex functions \(f,h\) defined on a compact convex set \(X\subset \mathbb{R}^{n}\), where \(\nabla f\) and \(h\) are assumed to be globally Lipschitz. The author proposes two subgradient-type methods: a modified version of the mirror-descent SA method due to \textit{A. Nemirovski, A. Juditsky, G. Lan} and \textit{A. Shapiro} [SIAM J. Optim. 19, No. 4, 1574--1609 (2009; Zbl 1189.90109)], which substantially improves the rate of convergence of that scheme on this problem class, and an accelerated stochastic approximation method, which can achieve an optimal rate of convergence (a schematic sketch of such an accelerated scheme is given after the keyword list below). The advantages of the latter method over other existing methods are illustrated by a discussion of its performance on a particular class of stochastic optimization problems.
    stochastic optimization
    optimal method
    convex optimization
    complexity theory
    stochastic approximation
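
The review above describes the accelerated stochastic approximation method only at a high level. As a rough illustration of how such a Nesterov-type scheme is typically structured, the following minimal Python sketch assumes a Euclidean ball for the feasible set X, folds the nonsmooth term h into the stochastic oracle, and uses simple illustrative step sizes (alpha_t, gamma_t) rather than the noise-adapted parameters analysed in the paper; it should be read as a schematic under these assumptions, not as the author's exact algorithm.

import numpy as np


def project_ball(x, radius=1.0):
    """Euclidean projection onto X = {x : ||x||_2 <= radius} (an illustrative choice of X)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def accelerated_sa(oracle, x0, n_iters=1000, lipschitz=1.0, radius=1.0):
    """Nesterov-type accelerated stochastic approximation sketch for min_{x in X} f(x) + h(x).

    `oracle(x)` returns a stochastic (sub)gradient estimate of the composite objective at x.
    The step-size choices below are illustrative, not the ones analysed in the paper.
    """
    x = x0.copy()        # "prox" sequence
    x_ag = x0.copy()     # aggregate (output) sequence
    for t in range(1, n_iters + 1):
        alpha = 2.0 / (t + 1)                    # extrapolation weight
        gamma = t / (2.0 * lipschitz)            # growing prox step size (illustrative)
        x_md = (1.0 - alpha) * x_ag + alpha * x  # middle point where the oracle is queried
        g = oracle(x_md)                         # stochastic (sub)gradient at x_md
        x = project_ball(x - gamma * g, radius)  # projected (Euclidean prox) step
        x_ag = (1.0 - alpha) * x_ag + alpha * x  # convex combination defines the output iterate
    return x_ag


# Toy usage: f(x) = 0.5 * ||x - b||^2 observed through a noisy gradient, h = 0.
rng = np.random.default_rng(0)
b = np.array([0.3, -0.2])
noisy_grad = lambda x: (x - b) + 0.01 * rng.standard_normal(x.shape)
x_hat = accelerated_sa(noisy_grad, np.zeros(2), n_iters=2000, lipschitz=1.0)

The structural point the sketch tries to convey is the interplay of three sequences: the stochastic oracle is queried at the extrapolated point x_md, the projection step updates x, and the averaged sequence x_ag is the iterate that is returned.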