Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
From MaRDI portal
Publication: 5408223
DOI: 10.1137/120880811
zbMath: 1295.90026
arXiv: 1309.5549
OpenAlex: W2963470657
MaRDI QID: Q5408223
Publication date: 9 April 2014
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1309.5549
Mathematics Subject Classification:
Analysis of algorithms and problem complexity (68Q25)
Nonconvex programming, global optimization (90C26)
Stochastic programming (90C15)
Stochastic approximation (62L20)
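The paper's zeroth-order approach rests on a Gaussian-smoothing gradient estimator, g = (f(x + νu) − f(x))/ν · u with u ~ N(0, I), plugged into a randomized stochastic gradient (RSG) loop whose output iterate is drawn at random. A minimal illustrative sketch follows (not the authors' exact algorithm: step sizes, mini-batching, and the probability mass used to select the output iterate are simplified here):

```python
import numpy as np

def zo_gradient(f, x, nu=1e-4, rng=None):
    """Gaussian-smoothing zeroth-order gradient estimator:
    g = (f(x + nu*u) - f(x)) / nu * u, with u ~ N(0, I).
    In expectation this is the gradient of the nu-smoothed f."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + nu * u) - f(x)) / nu * u

def randomized_stochastic_gradient(f, x0, steps=2000, lr=1e-3, rng=None):
    """RSG-style loop (simplified): gradient-descent updates driven by
    the zeroth-order estimator; the output iterate is drawn uniformly
    at random, mirroring the randomized termination rule (a constant
    step size makes the selection distribution uniform)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(steps):
        x = x - lr * zo_gradient(f, x, rng=rng)
        iterates.append(x.copy())
    return iterates[rng.integers(len(iterates))]
```

Averaging many draws of the estimator recovers the gradient of the smoothed objective, which is why only two function evaluations per step suffice in the derivative-free setting.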
Related Items (only showing first 100 items)
Learning with Limited Samples: Meta-Learning and Applications to Communication Systems ⋮ Block coordinate type methods for optimization and learning ⋮ Lower bounds for non-convex stochastic optimization ⋮ Scalable subspace methods for derivative-free nonlinear least-squares optimization ⋮ Zeroth-order optimization with orthogonal random directions ⋮ Stochastic gradient descent with noise of machine learning type. I: Discrete time analysis ⋮ Asynchronous fully-decentralized SGD in the cluster-based model ⋮ Stochastic momentum methods for non-convex learning without bounded assumptions ⋮ Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities ⋮ A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization ⋮ Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization ⋮ Unified analysis of stochastic gradient methods for composite convex and smooth optimization ⋮ Stochastic search for a parametric cost function approximation: energy storage with rolling forecasts ⋮ A unified analysis of stochastic gradient‐free Frank–Wolfe methods ⋮ Parallel and distributed asynchronous adaptive stochastic gradient methods ⋮ Scaling up stochastic gradient descent for non-convex optimisation ⋮ Learning with risks based on M-location ⋮ A framework of convergence analysis of mini-batch stochastic projected gradient methods ⋮ Complexity analysis of a stochastic variant of generalized alternating direction method of multipliers ⋮ Federated learning for minimizing nonsmooth convex loss functions ⋮ Adaptive step size rules for stochastic optimization in large-scale learning ⋮ High-probability generalization bounds for pointwise uniformly stable algorithms ⋮ Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator ⋮ Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity ⋮ 
Byzantine-robust loopless stochastic variance-reduced gradient ⋮ A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning ⋮ A Convergence Study of SGD-Type Methods for Stochastic Optimization ⋮ A line search based proximal stochastic gradient algorithm with dynamical variance reduction ⋮ Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions ⋮ Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization ⋮ Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization ⋮ Stochastic Fixed-Point Iterations for Nonexpansive Maps: Convergence and Error Bounds ⋮ A Decomposition Algorithm for Two-Stage Stochastic Programs with Nonconvex Recourse Functions ⋮ Bayesian Stochastic Gradient Descent for Stochastic Optimization with Streaming Input Data ⋮ Numerical Analysis for Convergence of a Sample-Wise Backpropagation Method for Training Stochastic Neural Networks ⋮ GANs training: A game and stochastic control approach ⋮ Unifying framework for accelerated randomized methods in convex optimization ⋮ Stochastic gradient descent: where optimization meets machine learning ⋮ Algorithms with gradient clipping for stochastic optimization with heavy-tailed noise ⋮ Two stochastic optimization algorithms for convex optimization with fixed point constraints ⋮ Recent theoretical advances in decentralized distributed convex optimization ⋮ Recent Theoretical Advances in Non-Convex Optimization ⋮ Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization ⋮ A multivariate adaptive gradient algorithm with reduced tuning efforts ⋮ A semismooth Newton stochastic proximal point algorithm with variance reduction ⋮ SPIRAL: a superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization ⋮ Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization ⋮ A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization ⋮ On the Convergence of Mirror Descent beyond Stochastic Convex Programming ⋮ Distributed Stochastic Optimization with Large Delays ⋮ Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming ⋮ Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization ⋮ A fully stochastic second-order trust region method ⋮ Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning ⋮ Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference ⋮ Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems ⋮ A theoretical and empirical comparison of gradient approximations in derivative-free optimization ⋮ Finite Difference Gradient Approximation: To Randomize or Not? ⋮ A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization ⋮ Global convergence rate analysis of unconstrained optimization methods based on probabilistic models ⋮ Stochastic optimization using a trust-region method and random models ⋮ Zeroth-order algorithms for stochastic distributed nonconvex optimization ⋮ Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning ⋮ Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates ⋮ Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling ⋮ Adaptive Sampling Strategies for Stochastic Optimization ⋮ Erratum: Swarming for Faster Convergence in Stochastic Optimization ⋮ Zeroth-order methods for noisy Hölder-gradient functions ⋮ An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization ⋮ Accelerated Methods for NonConvex Optimization ⋮ On the information-adaptive variants of the ADMM: an iteration complexity perspective ⋮ A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization ⋮ Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty ⋮ Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations ⋮ Swarming for Faster Convergence in Stochastic Optimization ⋮ Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization ⋮ Improved complexities for stochastic conditional gradient methods under interpolation-like conditions ⋮ Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent ⋮ An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization ⋮ Risk-Sensitive Reinforcement Learning via Policy Gradient Search ⋮ Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization ⋮ Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization ⋮ Misspecified nonconvex statistical optimization for sparse phase retrieval ⋮ Stochastic heavy ball ⋮ On Sampling Rates in Simulation-Based Recursions ⋮ Stochastic first-order methods for convex and nonconvex functional constrained optimization ⋮ MultiComposite Nonconvex Optimization for Training Deep Neural Networks ⋮ Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs ⋮ Penalty methods with stochastic approximation for stochastic nonlinear programming ⋮ Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points ⋮ Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization ⋮ Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis ⋮ Accelerated stochastic variance reduction for a class of convex optimization problems ⋮ Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market ⋮ Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization ⋮ Optimization-Based Calibration of Simulation Input Models
This page was built for publication: Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming