Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming

From MaRDI portal

Publication: 5408223

DOI: 10.1137/120880811
zbMath: 1295.90026
arXiv: 1309.5549
OpenAlex: W2963470657
MaRDI QID: Q5408223

Saeed Ghadimi, Guanghui Lan

Publication date: 9 April 2014

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1309.5549

Related Items (only showing first 100 items)

- Learning with Limited Samples: Meta-Learning and Applications to Communication Systems
- Block coordinate type methods for optimization and learning
- Lower bounds for non-convex stochastic optimization
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Zeroth-order optimization with orthogonal random directions
- Stochastic gradient descent with noise of machine learning type. I: Discrete time analysis
- Asynchronous fully-decentralized SGD in the cluster-based model
- Stochastic momentum methods for non-convex learning without bounded assumptions
- Zeroth-order algorithms for nonconvex-strongly-concave minimax problems with improved complexities
- A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization
- Sign stochastic gradient descents without bounded gradient assumption for the finite sum minimization
- Unified analysis of stochastic gradient methods for composite convex and smooth optimization
- Stochastic search for a parametric cost function approximation: energy storage with rolling forecasts
- A unified analysis of stochastic gradient‐free Frank–Wolfe methods
- Parallel and distributed asynchronous adaptive stochastic gradient methods
- Scaling up stochastic gradient descent for non-convex optimisation
- Learning with risks based on M-location
- A framework of convergence analysis of mini-batch stochastic projected gradient methods
- Complexity analysis of a stochastic variant of generalized alternating direction method of multipliers
- Federated learning for minimizing nonsmooth convex loss functions
- Adaptive step size rules for stochastic optimization in large-scale learning
- High-probability generalization bounds for pointwise uniformly stable algorithms
- Proximal variable smoothing method for three-composite nonconvex nonsmooth minimization with a linear operator
- Communication-efficient and privacy-preserving large-scale federated learning counteracting heterogeneity
- Byzantine-robust loopless stochastic variance-reduced gradient
- A modified stochastic quasi-Newton algorithm for summing functions problem in machine learning
- A Convergence Study of SGD-Type Methods for Stochastic Optimization
- A line search based proximal stochastic gradient algorithm with dynamical variance reduction
- Convergence of gradient algorithms for nonconvex \(C^{1+ \alpha}\) cost functions
- Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
- Stochastic Fixed-Point Iterations for Nonexpansive Maps: Convergence and Error Bounds
- A Decomposition Algorithm for Two-Stage Stochastic Programs with Nonconvex Recourse Functions
- Bayesian Stochastic Gradient Descent for Stochastic Optimization with Streaming Input Data
- Numerical Analysis for Convergence of a Sample-Wise Backpropagation Method for Training Stochastic Neural Networks
- GANs training: A game and stochastic control approach
- Unifying framework for accelerated randomized methods in convex optimization
- Stochastic gradient descent: where optimization meets machine learning
- Algorithms with gradient clipping for stochastic optimization with heavy-tailed noise
- Two stochastic optimization algorithms for convex optimization with fixed point constraints
- Recent theoretical advances in decentralized distributed convex optimization
- Recent Theoretical Advances in Non-Convex Optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- A multivariate adaptive gradient algorithm with reduced tuning efforts
- A semismooth Newton stochastic proximal point algorithm with variance reduction
- SPIRAL: a superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- On the Convergence of Mirror Descent beyond Stochastic Convex Programming
- Distributed Stochastic Optimization with Large Delays
- Stochastic Difference-of-Convex-Functions Algorithms for Nonconvex Programming
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- A fully stochastic second-order trust region method
- Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference
- Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Finite Difference Gradient Approximation: To Randomize or Not?
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Stochastic optimization using a trust-region method and random models
- Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Zeroth-Order Stochastic Compositional Algorithms for Risk-Aware Learning
- Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
- Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling
- Adaptive Sampling Strategies for Stochastic Optimization
- Erratum: Swarming for Faster Convergence in Stochastic Optimization
- Zeroth-order methods for noisy Hölder-gradient functions
- Unnamed Item
- Unnamed Item
- Unnamed Item
- An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- Accelerated Methods for NonConvex Optimization
- On the information-adaptive variants of the ADMM: an iteration complexity perspective
- A Diffusion Approximation Theory of Momentum Stochastic Gradient Descent in Nonconvex Optimization
- Coupled Learning Enabled Stochastic Programming with Endogenous Uncertainty
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
- Swarming for Faster Convergence in Stochastic Optimization
- Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
- Improved complexities for stochastic conditional gradient methods under interpolation-like conditions
- Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent
- An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization
- Risk-Sensitive Reinforcement Learning via Policy Gradient Search
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
- Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
- Unnamed Item
- Misspecified nonconvex statistical optimization for sparse phase retrieval
- Stochastic heavy ball
- On Sampling Rates in Simulation-Based Recursions
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- MultiComposite Nonconvex Optimization for Training Deep Neural Networks
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
- Optimization-Based Calibration of Simulation Input Models

This page was built for publication: Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming