An optimal method for stochastic composite optimization


DOI: 10.1007/s10107-010-0434-y
zbMath: 1273.90136
OpenAlex: W2024484010
MaRDI QID: Q431018

Guanghui Lan

Publication date: 26 June 2012

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s10107-010-0434-y
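For context: the paper studies stochastic composite optimization, i.e. minimizing \(\Psi(x) = f(x) + h(x)\) over a convex set \(X\), where \(f\) is smooth convex, \(h\) is nonsmooth convex, and both are accessible only through a stochastic first-order oracle; the accelerated stochastic approximation (AC-SA) method proposed there attains the optimal convergence rate for this problem class. The snippet below is a minimal Euclidean sketch of an AC-SA-style three-sequence iteration, not the paper's exact scheme: the stepsize rule, the function names, and the toy least-squares problem are illustrative assumptions.

import numpy as np

# Minimal Euclidean sketch of an AC-SA-style accelerated stochastic
# approximation iteration (illustrative; the stepsizes below are NOT the
# tuned policy analyzed in the paper).
def ac_sa(stoch_grad, project, x0, L, n_iters):
    """stoch_grad(x): noisy (sub)gradient of Psi at x.
    project(x): Euclidean projection onto the feasible set X."""
    x = x_ag = np.asarray(x0, dtype=float)
    for t in range(1, n_iters + 1):
        alpha = 2.0 / (t + 1)                  # extrapolation weight
        gamma = t / (4.0 * L)                  # illustrative stepsize choice
        x_md = (1 - alpha) * x_ag + alpha * x  # point where the oracle is queried
        g = stoch_grad(x_md)                   # one stochastic oracle call
        x = project(x - gamma * g)             # projected (prox) step
        x_ag = (1 - alpha) * x_ag + alpha * x  # aggregated iterate (the output)
    return x_ag

# Toy usage: noisy least squares over the unit ball (names hypothetical).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
L = np.linalg.norm(A, 2) ** 2                          # Lipschitz constant of the smooth gradient
noisy_grad = lambda x: A.T @ (A @ x - b) + 0.1 * rng.standard_normal(10)
unit_ball = lambda x: x / max(1.0, np.linalg.norm(x))  # Euclidean projection
x_hat = ac_sa(noisy_grad, unit_ball, np.zeros(10), L, 2000)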



Related Items

Research on three-step accelerated gradient algorithm in deep learning
A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
Block coordinate type methods for optimization and learning
Stochastic forward-backward splitting for monotone inclusions
An optimal trade-off model for portfolio selection with sensitivity of parameters
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
A Smoothing Direct Search Method for Monte Carlo-Based Bound Constrained Composite Nonsmooth Optimization
Gradient sliding for composite optimization
On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
New results on subgradient methods for strongly convex optimization problems with a unified analysis
Stochastic optimization using a trust-region method and random models
A smoothing stochastic gradient method for composite optimization
Accelerated Extra-Gradient Descent: A Novel Accelerated First-Order Method
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
On the information-adaptive variants of the ADMM: an iteration complexity perspective
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
A multi-step doubly stabilized bundle method for nonsmooth convex optimization
ASD+M: automatic parameter tuning in stochastic optimization and on-line learning
Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
Algorithms for stochastic optimization with function or expectation constraints
Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
Accelerated schemes for a class of variational inequalities
Optimization with Reference-Based Robust Preference Constraints
Subgradient ellipsoid method for nonsmooth convex problems
Unifying mirror descent and dual averaging
Semi-discrete optimal transport: hardness, regularization and numerical solution
A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
Nonconvex optimization with inertial proximal stochastic variance reduction gradient
Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation
First-order methods of smooth convex optimization with inexact oracle
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
An overview of stochastic quasi-Newton methods for large-scale machine learning
A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
Complexity analysis of a stochastic variant of generalized alternating direction method of multipliers
\(O(1/t)\) complexity analysis of the generalized alternating direction method of multipliers
Inexact proximal stochastic gradient method for convex composite optimization
Optimal Algorithms for Stochastic Complementary Composite Minimization
Bridging the gap between constant step size stochastic gradient descent and Markov chains
Block mirror stochastic gradient method for stochastic optimization
Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
Stochastic heavy ball
First-order methods for convex optimization
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Data-Driven Mirror Descent with Input-Convex Neural Networks
On the Adaptivity of Stochastic Gradient-Based Optimization
Adaptive proximal SGD based on new estimating sequences for sparser ERM
Penalty methods with stochastic approximation for stochastic nonlinear programming
Accelerated stochastic variance reduction for a class of convex optimization problems
Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
A Multilevel Proximal Gradient Algorithm for a Class of Composite Optimization Problems
A sparsity preserving stochastic gradient methods for sparse regression
Two stochastic optimization algorithms for convex optimization with fixed point constraints
Recent theoretical advances in decentralized distributed convex optimization
Portfolio selection with the effect of systematic risk diversification: formulation and accelerated gradient algorithm
Conditional gradient algorithms for norm-regularized smooth convex optimization
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
Approximation algorithms from inexact solutions to semidefinite programming relaxations of combinatorial optimization problems
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Convergence of stochastic proximal gradient algorithm
Dynamic stochastic approximation for multi-stage stochastic optimization
An optimal randomized incremental gradient method
An accelerated directional derivative method for smooth stochastic convex optimization
Random Gradient Extrapolation for Distributed and Stochastic Optimization
General convergence analysis of stochastic first-order methods for composite optimization
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Algorithms of robust stochastic optimization based on mirror descent method
Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Communication-efficient algorithms for decentralized and stochastic optimization
Conditional Gradient Sliding for Convex Optimization
Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
A family of subgradient-based methods for convex optimization problems in a unifying framework
Generalized uniformly optimal methods for nonlinear programming
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
On the Solution of Stochastic Optimization and Variational Problems in Imperfect Information Regimes
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone
Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
An efficient primal dual prox method for non-smooth optimization
On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
Inexact model: a framework for optimization and variational inequalities
Universal intermediate gradient method for convex problems with inexact oracle
Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
On stochastic accelerated gradient with convergence rate
An Optimal High-Order Tensor Method for Convex Optimization
Accelerated gradient methods for nonconvex nonlinear and stochastic programming

