Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
From MaRDI portal
Publication:5962719
Abstract: This paper considers a class of constrained stochastic composite optimization problems whose objective function is given by the sum of a differentiable (possibly nonconvex) component and a non-differentiable (but convex) component. To solve these problems, we propose a randomized stochastic projected gradient (RSPG) algorithm, in which a properly sized mini-batch of samples is taken at each iteration, depending on the total budget of stochastic samples allowed. The RSPG algorithm also employs a general distance function, allowing it to exploit the geometry of the feasible region. The complexity of this algorithm is established in a unified setting, which shows that the algorithm is nearly optimal for convex stochastic programming. A post-optimization phase is also proposed to significantly reduce the variance of the solutions returned by the algorithm. In addition, based on the RSPG algorithm, a stochastic gradient-free algorithm, which uses only stochastic zeroth-order information, is also discussed. Some preliminary numerical results are provided.
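The iteration described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact method: it assumes the Euclidean distance as the distance function (so the composite step reduces to a proximal/projection step), a toy quadratic objective with an \(\ell_1\) term, and hypothetical helper names (`stoch_grad`, `soft_threshold`); the paper's general Bregman-distance setting and batch-size schedule are omitted.

```python
import numpy as np

def rspg(stoch_grad, prox, x0, n_iters=100, batch_size=16, step=0.1, seed=0):
    """Sketch of a randomized stochastic projected gradient loop.

    stoch_grad(x, m): average of m stochastic gradients of the smooth part at x.
    prox(v, step): prox/projection step handling the convex nonsmooth part.
    Returns one iterate chosen uniformly at random (the 'randomized' output rule
    used to state complexity guarantees in the nonconvex setting).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_iters):
        g = stoch_grad(x, batch_size)     # mini-batch gradient estimate
        x = prox(x - step * g, step)      # prox-gradient / projected step
        iterates.append(x.copy())
    return iterates[rng.integers(len(iterates))]

# Toy problem (assumed for illustration):
# f(x) = E[0.5 * ||x - (target + noise)||^2],  h(x) = lam * ||x||_1
lam = 0.05
target = np.array([1.0, -2.0, 0.0])

def stoch_grad(x, m):
    noise = np.random.default_rng().normal(scale=0.1, size=(m, x.size))
    return np.mean(x - (target + noise), axis=0)  # mini-batch average

def soft_threshold(v, step):
    # prox operator of step * lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam * step, 0.0)

x_hat = rspg(stoch_grad, soft_threshold, np.zeros(3), n_iters=200, step=0.5)
```

The randomly selected output iterate is what the paper's post-optimization phase is designed to stabilize: running several independent trials and picking the candidate with the smallest estimated gradient mapping reduces the variance of the returned solution.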
Recommendations
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A hybrid stochastic optimization framework for composite nonconvex optimization
- A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
- Random gradient-free minimization of convex functions
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Cites work
- scientific article; zbMATH DE number 439951 (no title available)
- scientific article; zbMATH DE number 3790208 (no title available)
- scientific article; zbMATH DE number 50675 (no title available)
- scientific article; zbMATH DE number 3296905 (no title available)
- A Stochastic Approximation Method
- A Unified View of the IPA, SF, and LR Gradient Estimation Techniques
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Acceleration of Stochastic Approximation by Averaging
- An optimal method for stochastic composite optimization
- Bregman Monotone Optimization Algorithms
- Convergence of Proximal-Like Algorithms
- Erratum to: "Minimizing finite sums with the stochastic average gradient"
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Introduction to Stochastic Search and Optimization
- Introductory lectures on convex optimization. A basic course.
- On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems
- Online convex optimization in the bandit setting: gradient descent without a gradient
- Online learning for matrix factorization and sparse coding
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization. II: Shrinking procedures and optimal algorithms
- Optimization for simulation: theory vs. practice
- Optimization techniques for semi-supervised support vector machines
- Random gradient-free minimization of convex functions
- Randomized smoothing for stochastic optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- The ordered subsets mirror descent optimization method with applications to tomography
- Validation analysis of mirror descent stochastic approximation method
- Variational Analysis
Cited in
- Dissipative imitation learning for discrete dynamic output feedback control with sparse data sets
- On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
- Robust High-Dimensional Regression with Coefficient Thresholding and Its Application to Imaging Data Analysis
- scientific article; zbMATH DE number 7705675 (no title available)
- Momentum-based accelerated mirror descent stochastic approximation for robust topology optimization under stochastic loads
- On stochastic accelerated gradient with convergence rate of regression learning
- A unified analysis of stochastic gradient‐free Frank–Wolfe methods
- Zeroth-order stochastic compositional algorithms for risk-aware learning
- SPIRAL: a superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization
- Graphical convergence of subgradients in nonconvex optimization and learning
- An accelerated stochastic mirror descent method
- A semismooth Newton stochastic proximal point algorithm with variance reduction
- Almost sure convergence rates of stochastic proximal gradient descent algorithm
- Random-reshuffled SARAH does not need full gradient computations
- Complexity of a projected Newton-CG method for optimization with bounds
- Stochastic linearized generalized alternating direction method of multipliers: expected convergence rates and large deviation properties
- Mini-batch stochastic subgradient for functional constrained optimization
- Stochastic gradient methods with preconditioned updates
- scientific article; zbMATH DE number 7625189 (no title available)
- Stochastic composition optimization of functions without Lipschitz continuous gradient
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Stochastic proximal gradient method for \(\ell_1\) regularized optimization over a sphere
- Proximal stochastic recursive momentum algorithm for nonsmooth nonconvex optimization problems
- Gradient complexity and non-stationary views of differentially private empirical risk minimization
- Stochastic augmented Lagrangian method in Riemannian shape manifolds
- Unifying framework for accelerated randomized methods in convex optimization
- A single cut proximal bundle method for stochastic convex composite optimization
- Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator
- Worst-case complexity of an SQP method for nonlinear equality constrained stochastic optimization
- Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities
- A framework of convergence analysis of mini-batch stochastic projected gradient methods
- Open problem: Iterative schemes for stochastic optimization: convergence statements and limit theorems
- Accelerating stochastic sequential quadratic programming for equality constrained optimization using predictive variance reduction
- A stochastic Bregman golden ratio algorithm for non-Lipschitz stochastic mixed variational inequalities with application to resource share problems
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
- Convergence rates for the stochastic gradient descent method for non-convex objective functions
- Block coordinate type methods for optimization and learning
- Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A stochastic approximation method for approximating the efficient frontier of chance-constrained nonlinear programs
- Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization
- Variance reduction on general adaptive stochastic mirror descent
- Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
- Inexact proximal stochastic second-order methods for nonconvex composite optimization
- An interior stochastic gradient method for a class of non-Lipschitz optimization problems
- Stochastic difference-of-convex-functions algorithms for nonconvex programming
- Proximally guided stochastic subgradient method for nonsmooth, nonconvex problems
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Stochastic proximal gradient methods for nonconvex problems in Hilbert spaces
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- A unified adaptive tensor approximation scheme to accelerate composite convex optimization
- Stochastic conditional gradient++: (Non)convex minimization and continuous submodular maximization
- Mini-batch learning of exponential family finite mixture models
- Stochastic nested variance reduction for nonconvex optimization
- On the computation of equilibria in monotone and potential stochastic hierarchical games
- Conditional gradient sliding for convex optimization
- Stopping criteria for, and strong convergence of, stochastic gradient descent on Bottou-Curtis-Nocedal functions
- Stochastic trust-region methods with trust-region radius depending on probabilistic models
- scientific article; zbMATH DE number 7255141 (no title available)
- An accelerated directional derivative method for smooth stochastic convex optimization
- Robust and sparse regression in generalized linear model by stochastic optimization
- A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
- Distributed variable sample-size gradient-response and best-response schemes for stochastic Nash equilibrium problems
- Dynamic stochastic approximation for multi-stage stochastic optimization
- Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces
- On stochastic accelerated gradient with convergence rate
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Stochastic model-based minimization of weakly convex functions
- Stochastic block mirror descent methods for nonsmooth and stochastic optimization
- Stochastic approximation methods for the two-stage stochastic linear complementarity problem
- A hybrid stochastic optimization framework for composite nonconvex optimization
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
- Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- Recent Theoretical Advances in Non-Convex Optimization
- Trimmed statistical estimation via variance reduction
- Understanding generalization error of SGD in nonconvex optimization
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- scientific article; zbMATH DE number 7370632 (no title available)
- scientific article; zbMATH DE number 7255155 (no title available)
- Generalized uniformly optimal methods for nonlinear programming
- Complexity analysis of a stochastic variant of generalized alternating direction method of multipliers
- Numerical solution of inverse problems by weak adversarial networks
- An accelerated method for derivative-free smooth stochastic convex optimization
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
- Stochastic polynomial optimization
- An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization
- Optimization-based calibration of simulation input models
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
- Stochastic gradient descent with noise of machine learning type. I: Discrete time analysis
- Block stochastic gradient iteration for convex and nonconvex optimization