An optimal method for stochastic composite optimization
From MaRDI portal
Publication:431018
DOI: 10.1007/s10107-010-0434-y · zbMATH Open: 1273.90136 · OpenAlex: W2024484010 · MaRDI QID: Q431018
Author: Guanghui Lan
Publication date: 26 June 2012
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-010-0434-y
Recommendations
- Universal method for stochastic composite optimization problems
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- scientific article; zbMATH DE number 1045614
- A smoothing stochastic gradient method for composite optimization
- Stochastic primal dual fixed point method for composite optimization
- A hybrid stochastic optimization framework for composite nonconvex optimization
- scientific article; zbMATH DE number 5281595
- scientific article; zbMATH DE number 4055383
- A stochastic Bregman primal-dual splitting algorithm for composite optimization
Cites Work
- Learning by mirror averaging
- Smooth minimization of non-smooth functions
- Introductory lectures on convex optimization. A basic course.
- Acceleration of Stochastic Approximation by Averaging
- Title not available
- A Stochastic Approximation Method
- Title not available
- Convex optimization methods for dimension reduction and coefficient estimation in multivariate linear regression
- Primal-dual subgradient methods for convex problems
- First-Order Methods for Sparse Covariance Selection
- Robust Stochastic Approximation Approach to Stochastic Programming
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Convex Analysis
- Introduction to Stochastic Search and Optimization
- On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean
- Title not available
- Smooth Optimization with Approximate Gradient
- Proximal Minimization Methods with Generalized Bregman Functions
- Bregman Monotone Optimization Algorithms
- Non-Euclidean restricted memory level method for large-scale convex optimization
- Iteration-complexity of first-order penalty methods for convex programming
- The empirical behavior of sampling methods for stochastic programming
- The sample average approximation method for stochastic discrete optimization
- Convergence of Proximal-Like Algorithms
- The Existence of Probability Measures with Given Marginals
- Monte Carlo bounding techniques for determining solution quality in stochastic programs
- The sample average approximation method applied to stochastic routing problems: a computational study
- Smooth Optimization Approach for Sparse Covariance Selection
- Monte Carlo sampling approach to stochastic programming
- Stochastic quasigradient methods and their application to system optimization
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Smoothing technique and its applications in semidefinite optimization
- Title not available
- On complexity of stochastic programming problems
- Large-scale semidefinite programming via a saddle point mirror-prox algorithm
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- A method of aggregate stochastic subgradients with on-line stepsize rules for convex stochastic programming problems
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
Cited In
- General convergence analysis of stochastic first-order methods for composite optimization
- A family of subgradient-based methods for convex optimization problems in a unifying framework
- An optimal high-order tensor method for convex optimization
- ASD+M: automatic parameter tuning in stochastic optimization and on-line learning
- First-order methods of smooth convex optimization with inexact oracle
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Algorithms for stochastic optimization with function or expectation constraints
- \(O(1/t)\) complexity analysis of the generalized alternating direction method of multipliers
- Linear coupling: an ultimate unification of gradient and mirror descent
- Communication-efficient algorithms for decentralized and stochastic optimization
- Accelerating Stochastic Composition Optimization
- A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
- On stochastic accelerated gradient with convergence rate
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Conditional gradient algorithms for norm-regularized smooth convex optimization
- Stochastic forward-backward splitting for monotone inclusions
- Subgradient ellipsoid method for nonsmooth convex problems
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- First-order methods for convex optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Algorithms of robust stochastic optimization based on mirror descent method
- On the solution of stochastic optimization and variational problems in imperfect information regimes
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A multilevel proximal gradient algorithm for a class of composite optimization problems
- An optimal trade-off model for portfolio selection with sensitivity of parameters
- Approximation algorithms from inexact solutions to semidefinite programming relaxations of combinatorial optimization problems
- A sparsity preserving stochastic gradient method for sparse regression
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Block stochastic gradient iteration for convex and nonconvex optimization
- An optimal randomized incremental gradient method
- Accelerated first-order primal-dual proximal methods for linearly constrained composite convex programming
- Robust accelerated gradient methods for smooth strongly convex functions
- An accelerated method for derivative-free smooth stochastic convex optimization
- Accelerated stochastic algorithms for convex-concave saddle-point problems
- An accelerated directional derivative method for smooth stochastic convex optimization
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Stochastic block mirror descent methods for nonsmooth and stochastic optimization
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Gradient sliding for composite optimization
- New results on subgradient methods for strongly convex optimization problems with a unified analysis
- Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
- Random gradient extrapolation for distributed and stochastic optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- Universal method for stochastic composite optimization problems
- Inexact proximal stochastic gradient method for convex composite optimization
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
- An efficient primal dual prox method for non-smooth optimization
- Nonconvex optimization with inertial proximal stochastic variance reduction gradient
- Convergence of stochastic proximal gradient algorithm
- Stochastic optimization using a trust-region method and random models
- On the global convergence rate of the gradient descent method for functions with Hölder continuous gradients
- Accelerated schemes for a class of variational inequalities
- On the convergence properties of non-Euclidean extragradient methods for variational inequalities with generalized monotone operators
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- Optimization with reference-based robust preference constraints
- Generalized uniformly optimal methods for nonlinear programming
- The million-variable "march" for stochastic combinatorial optimization
- Stochastic heavy ball
- Scheduled restart momentum for accelerated stochastic gradient descent
- Conditional gradient sliding for convex optimization
- A smoothing stochastic gradient method for composite optimization
- A multi-step doubly stabilized bundle method for nonsmooth convex optimization
- Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
- Unifying mirror descent and dual averaging
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- Title not available
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Momentum-based accelerated mirror descent stochastic approximation for robust topology optimization under stochastic loads
- Optimal Algorithms for Stochastic Complementary Composite Minimization
- Graph-dependent implicit regularisation for distributed stochastic subgradient descent
- On stochastic accelerated gradient with convergence rate of regression learning
- Inexact model: a framework for optimization and variational inequalities
- Semi-discrete optimal transport: hardness, regularization and numerical solution
- Universal intermediate gradient method for convex problems with inexact oracle
- Portfolio selection with the effect of systematic risk diversification: formulation and accelerated gradient algorithm
- Accelerated zero-order SGD method for solving the black box optimization problem under "overparametrization" condition
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- Average curvature FISTA for nonconvex smooth composite optimization problems
- Data-Driven Mirror Descent with Input-Convex Neural Networks
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Computing the best approximation over the intersection of a polyhedral set and the doubly nonnegative cone
- Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
- Research on three-step accelerated gradient algorithm in deep learning
- Automatic, dynamic, and nearly optimal learning rate specification via local quadratic approximation
- Utilizing second order information in minibatch stochastic variance reduced proximal iterations
- Stochastic linearized generalized alternating direction method of multipliers: expected convergence rates and large deviation properties
- Block coordinate type methods for optimization and learning
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- A data efficient and feasible level set method for stochastic convex optimization with expectation constraints
- A single cut proximal bundle method for stochastic convex composite optimization
- Block mirror stochastic gradient method for stochastic optimization
- Recent theoretical advances in decentralized distributed convex optimization
- Adaptive proximal SGD based on new estimating sequences for sparser ERM