The following pages link to (Q2752037):
Displayed 36 items.
- A Stochastic Quasi-Newton Method for Large-Scale Optimization (Q121136)
- New results on subgradient methods for strongly convex optimization problems with a unified analysis (Q316174)
- On stochastic gradient and subgradient methods with adaptive steplength sequences (Q445032)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- Distributed stochastic subgradient projection algorithms for convex optimization (Q620442)
- Incremental proximal methods for large scale convex optimization (Q644913)
- Discrete-time gradient flows and law of large numbers in Alexandrov spaces (Q745561)
- Convergence rates of subgradient methods for quasi-convex optimization problems (Q782917)
- Strong law of large numbers for the \(L^1\)-Karcher mean (Q785879)
- Subgradient methods for saddle-point problems (Q1035898)
- An incremental subgradient method on Riemannian manifolds (Q1752648)
- Modified Fejér sequences and applications (Q1790672)
- Analysis of biased stochastic gradient descent using sequential semidefinite programs (Q2020610)
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization (Q2033403)
- Inexact first-order primal-dual algorithms (Q2181598)
- Bridging the gap between constant step size stochastic gradient descent and Markov chains (Q2196224)
- Faster subgradient methods for functions with Hölderian growth (Q2297653)
- Lagrangian relaxation of the generic materials and operations planning model (Q2303321)
- A globally convergent incremental Newton method (Q2349125)
- Incremental gradient-free method for nonsmooth distributed optimization (Q2411165)
- A subgradient method with non-monotone line search (Q2696908)
- Stochastic First-Order Methods with Random Constraint Projection (Q2796796)
- Achieving Geometric Convergence for Distributed Optimization Over Time-Varying Graphs (Q4602346)
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods (Q4634094)
- A robust multi-batch L-BFGS method for machine learning (Q4972551)
- Weakly Convex Optimization over Stiefel Manifold Using Riemannian Subgradient-Type Methods (Q5003207)
- Subgradient method with feasible inexact projections for constrained convex optimization problems (Q5045169)
- (Q5053196)
- Analysis of the BFGS Method with Errors (Q5210518)
- Nonconvex Robust Low-Rank Matrix Recovery (Q5217366)
- Path-based incremental target level algorithm on Riemannian manifolds (Q5221271)
- Convergence Rate of Incremental Gradient and Incremental Newton Methods (Q5237308)
- Adaptive Sequential Sample Average Approximation for Solving Two-Stage Stochastic Linear Programs (Q5857298)
- Semi-discrete optimal transport: hardness, regularization and numerical solution (Q6038666)
- A trust region method for noisy unconstrained optimization (Q6052069)
- A review of decentralized optimization focused on information flows of decomposition algorithms (Q6164389)