On stochastic gradient and subgradient methods with adaptive steplength sequences
From MaRDI portal
Abstract: The performance of standard stochastic approximation implementations can vary significantly with the choice of the steplength sequence, and in general little guidance is available about good choices. Motivated by this gap, in the first part of the paper we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, each equipped with convergence theory. The first scheme, referred to as a recursive steplength stochastic approximation scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. This rule is seen to yield the optimal steplength sequence over a prescribed set of choices. The second scheme, termed a cascading steplength stochastic approximation scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. In the second part of the paper, we allow for nondifferentiable objectives and propose a local smoothing technique that leads to a differentiable approximation of the function. Assuming a uniform distribution for the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the resulting Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution. The resulting adaptive steplength schemes are applied to three stochastic optimization problems. We observe that both schemes perform well in practice and display markedly less reliance on user-defined parameters.
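The two adaptive schemes described in the abstract can be illustrated on a toy strongly convex problem. The sketch below is hypothetical: the recursion `gamma_{k+1} = gamma_k * (1 - c * gamma_k)` mirrors only the flavor of the recursive rule (the steplength as a simple function of its predecessor and a problem parameter), and the windowed error estimate and halving factor in the cascading scheme are illustrative assumptions, not the paper's exact constants.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 0.5  # assumed strong-convexity parameter of f(x) = (c/2) * x**2


def noisy_grad(x):
    """Unbiased stochastic gradient of f(x) = (c/2) * x**2."""
    return c * x + rng.normal(scale=0.1)


# Recursive steplength scheme: each steplength is a simple function of
# its predecessor and the problem parameter c.
x, gamma = 5.0, 0.9 / c  # gamma_0 < 1/c keeps the recursion positive
for _ in range(2000):
    x -= gamma * noisy_grad(x)
    gamma *= 1.0 - c * gamma
x_recursive = x

# Cascading scheme: piecewise-constant steplength, reduced whenever a
# crude windowed error estimate falls below a (hypothetical) threshold.
x, gamma, window = 5.0, 0.5, []
for _ in range(2000):
    x -= gamma * noisy_grad(x)
    window.append(abs(x))
    if len(window) == 100:
        if np.mean(window) < 2.0 * gamma:  # error threshold met
            gamma *= 0.5  # drop to the next constant steplength
        window = []
x_cascading = x

print(round(x_recursive, 3), round(x_cascading, 3))
```

For the nonsmooth setting treated in the second part of the paper, one would first replace the objective by its local smoothing, an expectation of the function over a uniformly distributed perturbation, and draw samples in the product space of the original measure and that artificial distribution, as the abstract describes.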
Recommendations
- Stochastic approximation with adaptive step sizes for optimization in noisy environment and its application in regression models
- Adaptive stochastic approximation algorithm
- A method of stochastic subgradients with complete feedback stepsize rule for convex stochastic approximation problems
- Ritz-like values in steplength selections for stochastic gradient methods
Cites work
- scientific article; zbMATH DE number 5348356
- scientific article; zbMATH DE number 4091201
- scientific article; zbMATH DE number 3706243
- scientific article; zbMATH DE number 2121076
- scientific article; zbMATH DE number 3349126
- L-Shaped Linear Programs with Applications to Optimal Control and Stochastic Programming
- A Complementarity Framework for Forward Contracting Under Uncertainty
- A Stochastic Approximation Method
- A smoothing method for mathematical programs with equilibrium constraints
- Acceleration of Stochastic Approximation by Averaging
- Convergence rate of incremental subgradient algorithms
- Decentralized Resource Allocation in Dynamic Networks of Agents
- Distributed asynchronous incremental subgradient methods
- Distributed stochastic subgradient projection algorithms for convex optimization
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Gradient Convergence in Gradient Methods with Errors
- Incremental stochastic subgradient algorithms for convex optimization
- Introduction to Stochastic Programming
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Probabilistic robust design with linear quadratic regulators
- Recourse-based stochastic nonlinear programming: properties and Benders-SQP algorithms
- Robust Stochastic Approximation Approach to Stochastic Programming
- Smooth SQP Methods for Mathematical Programs with Nonlinear Complementarity Constraints
- Solving variational inequalities with stochastic mirror-prox algorithm
- Stochastic Approximation Approaches to the Stochastic Variational Inequality Problem
- Stochastic Estimation of the Maximum of a Regression Function
- Stochastic approximation method with gradient averaging for unconstrained problems
- Stochastic optimization problems with nondifferentiable cost functionals
- The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning
Cited in (37)
- Perturbed iterate SGD for Lipschitz continuous loss functions
- A new computational framework for log-concave density estimation
- Technical note: Consistency analysis of sequential learning under approximate Bayesian inference
- Variance-based extragradient methods with line search for stochastic variational inequalities
- Descent direction method with line search for unconstrained optimization in noisy environment
- Almost sure convergence of the forward-backward-forward splitting algorithm
- A stochastic gradient method for a class of nonlinear PDE-constrained optimal control problems under uncertainty
- Adaptive stochastic approximation algorithm
- Stochastic forward-backward splitting for monotone inclusions
- Improved variance reduction extragradient method with line search for stochastic variational inequalities
- Convex optimization over fixed point sets of quasi-nonexpansive and nonexpansive mappings in utility-based bandwidth allocation problems with operational constraints
- String-averaging incremental stochastic subgradient algorithms
- On the computation of equilibria in monotone and potential stochastic hierarchical games
- EFIX: exact fixed point methods for distributed optimization
- scientific article; zbMATH DE number 7306906
- A stopping rule for stochastic approximation
- Subsampled first-order optimization methods with applications in imaging
- Non-cooperative games with minmax objectives
- A stochastic quasi-Newton method for large-scale optimization
- On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
- Convergence properties of stochastic proximal subgradient method in solving a class of composite optimization problems with cardinality regularizer
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- The incremental subgradient methods on distributed estimations in-network
- Stochastic generalized Nash equilibrium seeking under partial-decision information
- Stochastic approximation for estimating the price of stability in stochastic Nash games
- Accelerating mini-batch SARAH by step size rules
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization
- Ritz-like values in steplength selections for stochastic gradient methods
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
- An Improved Unconstrained Approach for Bilevel Optimization
- Stochastic mirror descent method for distributed multi-agent optimization
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- An incremental subgradient method on Riemannian manifolds
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
- Incremental gradient-free method for nonsmooth distributed optimization
MaRDI item Q445032