Block Stochastic Gradient Iteration Methods for Convex and Nonconvex Optimization

From MaRDI portal
Publication:2945126

DOI: 10.1137/140983938 · zbMath: 1342.93125 · arXiv: 1408.2597 · OpenAlex: W2963264932 · MaRDI QID: Q2945126

Yangyang Xu, Wotao Yin

Publication date: 9 September 2015

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1408.2597
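
For orientation, here is a minimal illustrative sketch of a block stochastic proximal-gradient update of the kind this paper studies. It is an assumption-laden simplification, not the authors' exact algorithm, step-size rule, or analysis: at each iteration one block of variables is selected, a stochastic gradient of the smooth term is taken with respect to that block, and a proximal step handles a possible nonsmooth block regularizer. All function names and problem data below are hypothetical.

```python
import numpy as np

# Illustrative sketch only: generic block stochastic proximal-gradient
# iteration for  min_x  E[f(x; xi)] + sum_i r_i(x_i),  with x split into
# blocks x_1, ..., x_s.  Not the paper's exact update rules or step sizes.

def block_stochastic_gradient(stoch_grad, prox, x0_blocks,
                              n_iters=2000, step0=0.1, seed=0):
    """stoch_grad(blocks, i): unbiased estimate of the partial gradient of
    the smooth term w.r.t. block i;  prox(v, i, a): prox of a*r_i at v."""
    rng = np.random.default_rng(seed)
    blocks = [b.copy() for b in x0_blocks]
    for k in range(1, n_iters + 1):
        i = rng.integers(len(blocks))        # pick one block (here: uniformly)
        alpha = step0 / np.sqrt(k)           # diminishing step size
        g = stoch_grad(blocks, i)            # noisy partial gradient
        blocks[i] = prox(blocks[i] - alpha * g, i, alpha)
    return blocks


# Hypothetical usage: mini-batch least squares split into two blocks, with
# an l1 penalty on the second block (soft-thresholding prox).
rng = np.random.default_rng(1)
A, x_true = rng.standard_normal((500, 20)), np.ones(20)
b = A @ x_true
split = 10

def stoch_grad(blocks, i):
    x = np.concatenate(blocks)
    rows = rng.integers(0, A.shape[0], size=32)          # random mini-batch
    g = A[rows].T @ (A[rows] @ x - b[rows]) / len(rows)  # noisy full gradient
    return g[:split] if i == 0 else g[split:]

def prox(v, i, alpha):
    lam = 0.05  # hypothetical l1 weight on the second block
    return v if i == 0 else np.sign(v) * np.maximum(np.abs(v) - alpha * lam, 0.0)

x_hat = block_stochastic_gradient(stoch_grad, prox,
                                  [np.zeros(split), np.zeros(split)])
```

The block split, uniform block sampling, and 1/sqrt(k) step size are generic choices made for the sketch; the paper itself treats more general convex and nonconvex composite settings.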



Related Items

Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
On Hermite-Hadamard type inequalities for \(n\)-polynomial convex stochastic processes
Inertial stochastic PALM and applications in machine learning
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Randomized Kaczmarz with averaging
Scalable subspace methods for derivative-free nonlinear least-squares optimization
New stochastic fractional integral and related inequalities of Jensen-Mercer and Hermite-Hadamard-Mercer type for convex stochastic processes
Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Block Policy Mirror Descent
On the convergence of asynchronous parallel iteration with unbounded delays
On Synchronous, Asynchronous, and Randomized Best-Response Schemes for Stochastic Nash Games
Block mirror stochastic gradient method for stochastic optimization
Proximal gradient method with extrapolation and line search for a class of non-convex and non-smooth problems
Two stochastic optimization algorithms for convex optimization with fixed point constraints
Stochastic Model-Based Minimization of Weakly Convex Functions
Primal-dual block-proximal splitting for a class of non-convex problems
Asynchronous Schemes for Stochastic and Misspecified Potential Games and Nonconvex Optimization
Decomposition Methods for Computing Directional Stationary Solutions of a Class of Nonsmooth Nonconvex Optimization Problems
A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
Markov chain block coordinate descent
Randomized primal-dual proximal block coordinate updates
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Sparse low-rank separated representation models for learning from data
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems


Uses Software


Cites Work