On stochastic gradient and subgradient methods with adaptive steplength sequences
DOI: 10.1016/j.automatica.2011.09.043
zbMATH Open: 1244.93178
arXiv: 1105.4549
OpenAlex: W2080335539
Wikidata: Q105583564 (Scholia: Q105583564)
MaRDI QID: Q445032
Authors: Farzad Yousefian, Angelia Nedić, Uday V. Shanbhag
Publication date: 24 August 2012
Published in: Automatica
Full work available at URL: https://arxiv.org/abs/1105.4549
Recommendations
- Stochastic approximation with adaptive step sizes for optimization in noisy environment and its application in regression models
- Adaptive stochastic approximation algorithm
- A method of stochastic subgradients with complete feedback stepsize rule for convex stochastic approximation problems
- Ritz-like values in steplength selections for stochastic gradient methods
Keywords: convex optimization; stochastic approximation; stochastic optimization; adaptive steplength; randomized smoothing techniques
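The keywords above point to stochastic (sub)gradient iterations whose steplength adapts as the run progresses. As a generic illustration (not the authors' exact scheme), the sketch below applies a self-adjusting recursion gamma_{k+1} = gamma_k(1 - c*gamma_k), where the tuning constant `c`, the oracle, and all function names are assumptions for this example:

```python
import random

def sgd_adaptive(grad_oracle, x0, gamma0=0.5, c=0.1, iters=2000, seed=0):
    """Stochastic gradient descent with a recursively shrinking steplength.

    The update gamma_{k+1} = gamma_k * (1 - c * gamma_k) is one example of a
    self-adjusting diminishing steplength; `c` is an assumed tuning constant
    (with c * gamma0 < 1 so the steplength stays positive).
    """
    rng = random.Random(seed)
    x, gamma = x0, gamma0
    for _ in range(iters):
        g = grad_oracle(x, rng)            # noisy (sub)gradient sample
        x = x - gamma * g                  # stochastic (sub)gradient step
        gamma = gamma * (1.0 - c * gamma)  # steplength recursion
    return x

# Toy problem: minimize f(x) = 0.5 * x^2, whose noisy gradient is x + noise.
noisy_grad = lambda x, rng: x + rng.gauss(0.0, 0.1)
x_star = sgd_adaptive(noisy_grad, x0=5.0)
```

For this recursion the steplength behaves like 1/(c k) for large k, so it is square-summable but not summable, the standard condition for stochastic approximation to converge.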
Cites Work
- Acceleration of Stochastic Approximation by Averaging
- A Stochastic Approximation Method
- Convergence rate of incremental subgradient algorithms
- Title not available
- Robust Stochastic Approximation Approach to Stochastic Programming
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Title not available
- A smoothing method for mathematical programs with equilibrium constraints
- Introduction to Stochastic Programming
- Gradient Convergence in Gradient Methods with Errors
- Stochastic Approximation Approaches to the Stochastic Variational Inequality Problem
- L-Shaped Linear Programs with Applications to Optimal Control and Stochastic Programming
- Solving variational inequalities with stochastic mirror-prox algorithm
- Probabilistic robust design with linear quadratic regulators
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Stochastic Estimation of the Maximum of a Regression Function
- Distributed stochastic subgradient projection algorithms for convex optimization
- Recourse-based stochastic nonlinear programming: properties and Benders-SQP algorithms
- The O.D.E. Method for Convergence of Stochastic Approximation and Reinforcement Learning
- A Complementarity Framework for Forward Contracting Under Uncertainty
- Decentralized Resource Allocation in Dynamic Networks of Agents
- Smooth SQP Methods for Mathematical Programs with Nonlinear Complementarity Constraints
- Stochastic optimization problems with nondifferentiable cost functionals
- Distributed asynchronous incremental subgradient methods
- Stochastic approximation method with gradient averaging for unconstrained problems
- Incremental stochastic subgradient algorithms for convex optimization
- Title not available
- Title not available
- Title not available
Cited In (37)
- A stochastic gradient method for a class of nonlinear PDE-constrained optimal control problems under uncertainty
- Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
- Technical note: Consistency analysis of sequential learning under approximate Bayesian inference
- Almost sure convergence of the forward-backward-forward splitting algorithm
- Stochastic forward-backward splitting for monotone inclusions
- Title not available
- Variance-based extragradient methods with line search for stochastic variational inequalities
- A new computational framework for log-concave density estimation
- Convex optimization over fixed point sets of quasi-nonexpansive and nonexpansive mappings in utility-based bandwidth allocation problems with operational constraints
- String-averaging incremental stochastic subgradient algorithms
- Gradient-free federated learning methods with \(l_1\) and \(l_2\)-randomization for non-smooth convex stochastic optimization problems
- Accelerating mini-batch SARAH by step size rules
- An incremental subgradient method on Riemannian manifolds
- Improved variance reduction extragradient method with line search for stochastic variational inequalities
- Subsampled first-order optimization methods with applications in imaging
- The incremental subgradient methods on distributed estimations in-network
- Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
- Non-cooperative games with minmax objectives
- Descent direction method with line search for unconstrained optimization in noisy environment
- On stochastic and deterministic quasi-Newton methods for nonstrongly convex optimization: asymptotic convergence and rate analysis
- Stochastic mirror descent method for distributed multi-agent optimization
- A stopping rule for stochastic approximation
- Stochastic approximation for estimating the price of stability in stochastic Nash games
- On smoothing, regularization, and averaging in stochastic approximation methods for stochastic variational inequality problems
- An Improved Unconstrained Approach for Bilevel Optimization
- A stochastic quasi-Newton method for large-scale optimization
- Adaptive stochastic approximation algorithm
- EFIX: exact fixed point methods for distributed optimization
- Stochastic generalized Nash equilibrium seeking under partial-decision information
- Incremental gradient-free method for nonsmooth distributed optimization
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
- A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks
- Convergence properties of stochastic proximal subgradient method in solving a class of composite optimization problems with cardinality regularizer
- Ritz-like values in steplength selections for stochastic gradient methods
- On the computation of equilibria in monotone and potential stochastic hierarchical games
- ASTRO-DF: a class of adaptive sampling trust-region algorithms for derivative-free stochastic optimization
- Perturbed iterate SGD for Lipschitz continuous loss functions