Pages that link to "Item:Q517295"
The following pages link to Minimizing finite sums with the stochastic average gradient (Q517295):
Displaying 50 items.
- Stochastic accelerated alternating direction method of multipliers with importance sampling (Q1626518) (← links)
- Inexact proximal stochastic gradient method for convex composite optimization (Q1694394) (← links)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038) (← links)
- Convergence of stochastic proximal gradient algorithm (Q2019902) (← links)
- Point process estimation with Mirror Prox algorithms (Q2019904) (← links)
- Generalized stochastic Frank-Wolfe algorithm with stochastic "substitute" gradient for structured convex optimization (Q2020608) (← links)
- Analysis of biased stochastic gradient descent using sequential semidefinite programs (Q2020610) (← links)
- Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods (Q2023684) (← links)
- Convergence rates for optimised adaptive importance samplers (Q2029096) (← links)
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization (Q2033403) (← links)
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (Q2039235) (← links)
- Relative utility bounds for empirically optimal portfolios (Q2040434) (← links)
- Multivariate goodness-of-fit tests based on Wasserstein distance (Q2044339) (← links)
- Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences (Q2044479) (← links)
- A stochastic primal-dual method for optimization with conditional value at risk constraints (Q2046691) (← links)
- On the regularization effect of stochastic gradient descent applied to least-squares (Q2055514) (← links)
- Stochastic DCA for minimizing a large sum of DC functions with application to multi-class logistic regression (Q2057761) (← links)
- Analysis of stochastic gradient descent in continuous time (Q2058762) (← links)
- Fast incremental expectation maximization for finite-sum optimization: nonasymptotic convergence (Q2058782) (← links)
- Fully asynchronous policy evaluation in distributed reinforcement learning over networks (Q2063869) (← links)
- The recursive variational Gaussian approximation (R-VGA) (Q2066753) (← links)
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization (Q2086938) (← links)
- Multi-agent reinforcement learning: a selective overview of theories and algorithms (Q2094040) (← links)
- A hierarchically low-rank optimal transport dissimilarity measure for structured data (Q2098781) (← links)
- A stochastic first-order trust-region method with inexact restoration for finite-sum minimization (Q2111466) (← links)
- Accelerating variance-reduced stochastic gradient methods (Q2118092) (← links)
- A hybrid stochastic optimization framework for composite nonconvex optimization (Q2118109) (← links)
- Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problems (Q2133414) (← links)
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization (Q2149551) (← links)
- Finite-sum smooth optimization with SARAH (Q2149950) (← links)
- Accelerating incremental gradient optimization with curvature information (Q2181597) (← links)
- The multiproximal linearization method for convex composite problems (Q2191762) (← links)
- Linear convergence of cyclic SAGA (Q2193004) (← links)
- Bi-fidelity stochastic gradient descent for structural optimization under uncertainty (Q2221705) (← links)
- A linearly convergent stochastic recursive gradient method for convex optimization (Q2228399) (← links)
- Leveraged least trimmed absolute deviations (Q2241912) (← links)
- A stochastic trust region method for unconstrained optimization problems (Q2298821) (← links)
- Provable accelerated gradient method for nonconvex low rank optimization (Q2303662) (← links)
- Deep relaxation: partial differential equations for optimizing deep neural networks (Q2319762) (← links)
- Convergence rates of accelerated proximal gradient algorithms under independent noise (Q2420162) (← links)
- Generalized forward-backward splitting with penalization for monotone inclusion problems (Q2423787) (← links)
- An accelerated variance reducing stochastic method with Douglas-Rachford splitting (Q2425236) (← links)
- Minimizing robust estimates of sums of parameterized functions (Q2667873) (← links)
- Improving kernel online learning with a snapshot memory (Q2673322) (← links)
- Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods (Q2674579) (← links)
- Adaptive Sampling for Incremental Optimization Using Stochastic Gradient Descent (Q2835640) (← links)
- An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization (Q3451763) (← links)
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice (Q4558545) (← links)
- Adaptive Sampling Strategies for Stochastic Optimization (Q4562248) (← links)
- (Q4583459) (← links)