Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming


Publication: 5408223

DOI: 10.1137/120880811
zbMath: 1295.90026
arXiv: 1309.5549
OpenAlex: W2963470657
MaRDI QID: Q5408223

Authors: Saeed Ghadimi, Guanghui Lan

Publication date: 9 April 2014

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1309.5549




Related Items (showing first 100 items)

Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
A fully stochastic second-order trust region method
Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning
Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
A theoretical and empirical comparison of gradient approximations in derivative-free optimization
Finite Difference Gradient Approximation: To Randomize or Not?
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
Stochastic optimization using a trust-region method and random models
Zeroth-order algorithms for stochastic distributed nonconvex optimization
Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning
Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
Adaptive Sampling Strategies for Stochastic Optimization
Erratum: Swarming for Faster Convergence in Stochastic Optimization
Zeroth-order methods for noisy Hölder-gradient functions
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Accelerated Methods for NonConvex Optimization
On the information-adaptive variants of the ADMM: an iteration complexity perspective
A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization
Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty
Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
Swarming for Faster Convergence in Stochastic Optimization
Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
Improved complexities for stochastic conditional gradient methods under interpolation-like conditions
Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization
Risk-Sensitive Reinforcement Learning via Policy Gradient Search
Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Misspecified nonconvex statistical optimization for sparse phase retrieval
Stochastic heavy ball
On Sampling Rates in Simulation-Based Recursions
Stochastic first-order methods for convex and nonconvex functional constrained optimization
MultiComposite Nonconvex Optimization for Training Deep Neural Networks
Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
Penalty methods with stochastic approximation for stochastic nonlinear programming
Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
Accelerated stochastic variance reduction for a class of convex optimization problems
Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
Optimization-Based Calibration of Simulation Input Models
Stochastic learning in multi-agent optimization: communication and payoff-based approaches
Conditional gradient type methods for composite nonlinear and stochastic optimization
Stochastic Model-Based Minimization of Weakly Convex Functions
Multilevel Stochastic Gradient Methods for Nested Composition Optimization
Recent Advances in Stochastic Riemannian Optimization
Hyperlink regression via Bregman divergence
Neural network regression for Bermudan option pricing
Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint
Simultaneous inference of periods and period-luminosity relations for Mira variable stars
Support points
Variational Representations and Neural Network Estimation of Rényi Divergences
Dynamic stochastic approximation for multi-stage stochastic optimization
An accelerated directional derivative method for smooth stochastic convex optimization
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
First- and Second-Order Methods for Online Convolutional Dictionary Learning
Stochastic subgradient method converges on tame functions
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
Stochastic polynomial optimization
Gradient convergence of deep learning-based numerical methods for BSDEs
A stochastic subspace approach to gradient-free optimization in high dimensions
Incremental without replacement sampling in nonconvex optimization
A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
A robust multi-batch L-BFGS method for machine learning
A zeroth order method for stochastic weakly convex optimization
Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence
A new one-point residual-feedback oracle for black-box learning and control
Derivative-free optimization methods
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Distributed Subgradient-Free Stochastic Optimization Algorithm for Nonsmooth Convex Functions over Time-Varying Networks
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Deep relaxation: partial differential equations for optimizing deep neural networks
A Stochastic Subgradient Method for Nonsmooth Nonconvex Multilevel Composition Optimization
Neural ODEs as the deep limit of ResNets with constant weights
On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
Levenberg-Marquardt method based on probabilistic Jacobian models for nonlinear equations
Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
Perturbed iterate SGD for Lipschitz continuous loss functions
An adaptive Polyak heavy-ball method
Distributionally robust optimization with moment ambiguity sets
On stochastic accelerated gradient with convergence rate
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
A hybrid stochastic optimization framework for composite nonconvex optimization
Accelerated gradient methods for nonconvex nonlinear and stochastic programming





