Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
DOI: 10.1137/130936361
zbMath: 1353.90095
arXiv: 1309.2249
OpenAlex: W1975768153
MaRDI QID: Q2954396
Publication date: 13 January 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1309.2249
Keywords: nonsmooth optimization, stochastic optimization, block coordinate descent, metric learning, mirror descent, stochastic composite optimization
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Convex programming (90C25); Stochastic programming (90C15); Stochastic approximation (62L20)
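For orientation, the sketch below illustrates the kind of iteration the title refers to: at each step an unbiased stochastic subgradient is drawn, one block of coordinates is sampled, and only that block is updated by a prox (mirror) step, with the output taken as an averaged iterate. This is a minimal sketch assuming a Euclidean distance-generating function (so the prox step reduces to a plain gradient step) and uniform block sampling; the function names, signature, and step-size choice are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_block_mirror_descent(stoch_grad, x0, n_blocks, n_steps, gamma):
    # Partition the coordinates into contiguous blocks.
    x = x0.astype(float).copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    x_avg = np.zeros_like(x)
    for _ in range(n_steps):
        g = stoch_grad(x)              # unbiased stochastic subgradient at x
        i = rng.integers(n_blocks)     # sample one block uniformly at random
        # Euclidean prox/mirror step restricted to the sampled block
        # (in practice only the i-th block of g would be computed).
        x[blocks[i]] -= gamma * g[blocks[i]]
        x_avg += x                     # accumulate for the averaged output
    return x_avg / n_steps             # ergodic (averaged) iterate

# Example: minimize E[||x - xi||^2] with xi ~ N(0, I); the minimizer is x = 0.
stoch_grad = lambda x: 2.0 * (x - rng.standard_normal(x.size))
x_hat = stochastic_block_mirror_descent(stoch_grad, np.ones(50),
                                        n_blocks=5, n_steps=20000, gamma=0.005)
print(np.linalg.norm(x_hat))           # should be close to zero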
Related Items (30)
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Block coordinate type methods for optimization and learning
- A fully stochastic second-order trust region method
- On optimal probabilities in stochastic coordinate descent methods
- Distributed constraint-coupled optimization via primal decomposition over random time-varying graphs
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Block Policy Mirror Descent
- On the convergence of asynchronous parallel iteration with unbounded delays
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Block mirror stochastic gradient method for stochastic optimization
- Unifying framework for accelerated randomized methods in convex optimization
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
- Optimization-Based Calibration of Simulation Input Models
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Recent Theoretical Advances in Non-Convex Optimization
- A Method with Convergence Rates for Optimization Problems with Variational Inequality Constraints
- Efficient Search of First-Order Nash Equilibria in Nonconvex-Concave Smooth Min-Max Problems
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Point process estimation with Mirror Prox algorithms
- An accelerated directional derivative method for smooth stochastic convex optimization
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Fastest rates for stochastic mirror descent methods
- Markov chain block coordinate descent
- Randomized primal-dual proximal block coordinate updates
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
Cites Work
- Subgradient methods for huge-scale optimization problems
- An optimal method for stochastic composite optimization
- Validation analysis of mirror descent stochastic approximation method
- A coordinate gradient descent method for nonsmooth separable minimization
- Introductory lectures on convex optimization. A basic course.
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Iteration-complexity of first-order penalty methods for convex programming
- Smooth minimization of nonsmooth functions with parallel coordinate descent methods
- Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
- Randomized Methods for Linear Constraints: Convergence Rates and Conditioning
- On the Convergence of a Matrix Splitting Algorithm for the Symmetric Monotone Linear Complementarity Problem
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- On the Convergence of Block Coordinate Descent Type Methods
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Stochastic Approximation Method
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming