Pages that link to "Item:Q5408223"
The following pages link to Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming (Q5408223):
Displayed 50 items.
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming (Q263185)
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models (Q1646566)
- Stochastic optimization using a trust-region method and random models (Q1646570)
- On the information-adaptive variants of the ADMM: an iteration complexity perspective (Q1668725)
- Stochastic heavy ball (Q1697485)
- Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market (Q1709750)
- Stochastic learning in multi-agent optimization: communication and payoff-based approaches (Q1716626)
- Conditional gradient type methods for composite nonlinear and stochastic optimization (Q1717236)
- Hyperlink regression via Bregman divergence (Q1980417)
- Support points (Q1991669)
- Dynamic stochastic approximation for multi-stage stochastic optimization (Q2020613)
- An accelerated directional derivative method for smooth stochastic convex optimization (Q2029381)
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems (Q2042418)
- Gradient convergence of deep learning-based numerical methods for BSDEs (Q2044106)
- A stochastic subspace approach to gradient-free optimization in high dimensions (Q2044475)
- Incremental without replacement sampling in nonconvex optimization (Q2046568)
- A zeroth order method for stochastic weakly convex optimization (Q2057220)
- Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence (Q2058782)
- A new one-point residual-feedback oracle for black-box learning and control (Q2063773)
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization (Q2082285)
- Levenberg-Marquardt method based on probabilistic Jacobian models for nonlinear equations (Q2082542)
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization (Q2086938)
- Perturbed iterate SGD for Lipschitz continuous loss functions (Q2093279)
- An adaptive Polyak heavy-ball method (Q2102380)
- Distributionally robust optimization with moment ambiguity sets (Q2111170)
- On stochastic accelerated gradient with convergence rate (Q2111814)
- A hybrid stochastic optimization framework for composite nonconvex optimization (Q2118109)
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization (Q2125072)
- Stochastic zeroth-order discretizations of Langevin diffusions for Bayesian inference (Q2137043)
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization (Q2143221)
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization (Q2149551)
- Zeroth-order algorithms for stochastic distributed nonconvex optimization (Q2151863)
- Zeroth-order methods for noisy Hölder-gradient functions (Q2162695)
- An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization (Q2188245)
- Primal-dual optimization algorithms over Riemannian manifolds: an iteration complexity analysis (Q2205985)
- Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization (Q2209727)
- Neural network regression for Bermudan option pricing (Q2239248)
- Smoothed functional-based gradient algorithms for off-policy reinforcement learning: a non-asymptotic viewpoint (Q2242923)
- Simultaneous inference of periods and period-luminosity relations for Mira variable stars (Q2245144)
- Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants (Q2282819)
- Stochastic subgradient method converges on tame functions (Q2291732)
- Deep relaxation: partial differential equations for optimizing deep neural networks (Q2319762)
- Misspecified nonconvex statistical optimization for sparse phase retrieval (Q2425184)
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations (Q2661588)
- Improved complexities for stochastic conditional gradient methods under interpolation-like conditions (Q2670499)
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization (Q2679567)
- Stochastic first-order methods for convex and nonconvex functional constrained optimization (Q2689819)
- Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs (Q2693641)
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points (Q2696568)
- Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization (Q2696917)