Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
From MaRDI portal
Publication: 5408223
DOI: 10.1137/120880811
zbMath: 1295.90026
arXiv: 1309.5549
OpenAlex: W2963470657
MaRDI QID: Q5408223
Publication date: 9 April 2014
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1309.5549
Analysis of algorithms and problem complexity (68Q25)
Nonconvex programming, global optimization (90C26)
Stochastic programming (90C15)
Stochastic approximation (62L20)
Related Items (showing first 100 items)
Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
A fully stochastic second-order trust region method
Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning
Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
Finite Difference Gradient Approximation: To Randomize or Not?
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
Stochastic optimization using a trust-region method and random models
Zeroth-order algorithms for stochastic distributed nonconvex optimization
Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning
Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
Adaptive Sampling Strategies for Stochastic Optimization
Erratum: Swarming for Faster Convergence in Stochastic Optimization
Zeroth-order methods for noisy Hölder-gradient functions
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Accelerated Methods for NonConvex Optimization
On the information-adaptive variants of the ADMM: an iteration complexity perspective
A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization
Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty
Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
Swarming for Faster Convergence in Stochastic Optimization
Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
Improved complexities for stochastic conditional gradient methods under interpolation-like conditions
Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization
Risk-Sensitive Reinforcement Learning via Policy Gradient Search
Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Misspecified nonconvex statistical optimization for sparse phase retrieval
Stochastic heavy ball
On Sampling Rates in Simulation-Based Recursions
Stochastic first-order methods for convex and nonconvex functional constrained optimization
MultiComposite Nonconvex Optimization for Training Deep Neural Networks
Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
Penalty methods with stochastic approximation for stochastic nonlinear programming
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
Accelerated stochastic variance reduction for a class of convex optimization problems
Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
Optimization-Based Calibration of Simulation Input Models
Stochastic learning in multi-agent optimization: communication and payoff-based approaches
Conditional gradient type methods for composite nonlinear and stochastic optimization
Stochastic Model-Based Minimization of Weakly Convex Functions
Multilevel Stochastic Gradient Methods for Nested Composition Optimization
Recent Advances in Stochastic Riemannian Optimization
Hyperlink regression via Bregman divergence
Neural network regression for Bermudan option pricing
Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint
Simultaneous inference of periods and period-luminosity relations for Mira variable stars
Support points
Variational Representations and Neural Network Estimation of Rényi Divergences
Dynamic stochastic approximation for multi-stage stochastic optimization
An accelerated directional derivative method for smooth stochastic convex optimization
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
First- and Second-Order Methods for Online Convolutional Dictionary Learning
Stochastic subgradient method converges on tame functions
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
Stochastic polynomial optimization
Gradient convergence of deep learning-based numerical methods for BSDEs
A stochastic subspace approach to gradient-free optimization in high dimensions
Incremental without replacement sampling in nonconvex optimization
A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
A robust multi-batch L-BFGS method for machine learning
A zeroth order method for stochastic weakly convex optimization
Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence
A new one-point residual-feedback oracle for black-box learning and control
Derivative-free optimization methods
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Deep relaxation: partial differential equations for optimizing deep neural networks
A Stochastic Subgradient Method for Nonsmooth Nonconvex Multilevel Composition Optimization
Neural ODEs as the deep limit of ResNets with constant weights
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Levenberg-Marquardt method based on probabilistic Jacobian models for nonlinear equations
Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
Perturbed iterate SGD for Lipschitz continuous loss functions
An adaptive Polyak heavy-ball method
Distributionally robust optimization with moment ambiguity sets
On stochastic accelerated gradient with convergence rate
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
A hybrid stochastic optimization framework for composite nonconvex optimization
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
This page was built for publication: Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming