A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems
From MaRDI portal
Publication: 3731371
DOI: 10.1007/BFb0121128
zbMath: 0597.90064
MaRDI QID: Q3731371
Wojciech Syski, Andrzej Ruszczyński
Publication date: 1986
Published in: Mathematical Programming Studies
Numerical mathematical programming methods (65K05)
Convex programming (90C25)
Nonlinear programming (90C30)
Stochastic programming (90C15)
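The title refers to stochastic subgradient methods whose stepsizes are adapted on-line from the observed subgradients. As a rough illustration of that idea (not the paper's aggregate-subgradient rule), the sketch below uses a Kesten-type rule: the stepsize is shrunk only when successive stochastic subgradients change sign, a sign of oscillation around a minimizer. All names and the test problem are illustrative assumptions.

```python
import random

def stochastic_subgradient(x0, subgrad, steps=5000, gamma0=1.0, seed=0):
    """Stochastic subgradient descent with a Kesten-type on-line
    stepsize rule (illustrative sketch, not the method of the paper).

    subgrad(x, rng) must return a stochastic subgradient of the
    objective at x, using rng for the random sample.
    """
    rng = random.Random(seed)
    x = x0
    k = 1            # counter driving the diminishing stepsize gamma0 / k
    g_prev = 0.0
    for _ in range(steps):
        g = subgrad(x, rng)
        if g * g_prev < 0:   # successive subgradients disagree in sign:
            k += 1           # iterates oscillate, so shrink the stepsize
        x -= (gamma0 / k) * g
        g_prev = g
    return x

# Example problem: minimize f(x) = E|x - xi| with xi ~ Uniform(-1, 3).
# A stochastic subgradient is sign(x - xi); the minimizer is the
# median of xi, i.e. x* = 1.
sg = lambda x, rng: (1.0 if x > rng.uniform(-1.0, 3.0) else -1.0)
x_star = stochastic_subgradient(x0=10.0, subgrad=sg)
```

With a fixed plain stepsize the iterates would keep oscillating with constant amplitude; the on-line rule reduces the stepsize only when that oscillation is actually observed, which is the general motivation behind adaptive rules of this kind.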
Related Items
On convergence of the stochastic subgradient method with on-line stepsize rules
A method of stochastic subgradients with complete feedback stepsize rule for convex stochastic approximation problems
Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming
Differentiability of the integral over a set defined by inclusion
On the information-adaptive variants of the ADMM: an iteration complexity perspective
Algorithms for stochastic optimization with function or expectation constraints
An optimal method for stochastic composite optimization
Penalty methods with stochastic approximation for stochastic nonlinear programming
A sparsity preserving stochastic gradient methods for sparse regression
A numerical method for solving stochastic programming problems with moment constraints on a distribution function
String-averaging incremental stochastic subgradient algorithms
Mean-Variance Risk-Averse Optimal Control of Systems Governed by PDEs with Random Parameter Fields Using Quadratic Approximations
Stochastic quasigradient methods for optimization of discrete event systems
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization