Pages that link to "Item:Q2307433"
From MaRDI portal
The following pages link to First-order and stochastic optimization methods for machine learning (Q2307433):
Displayed 50 items.
- Solving convex min-min problems with smoothness and strong convexity in one group of variables and low dimension in the other (Q2069678) (← links)
- A new restricted memory level bundle method for constrained convex nonsmooth optimization (Q2080833) (← links)
- Dualize, split, randomize: toward fast nonsmooth optimization algorithms (Q2082232) (← links)
- Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces (Q2082546) (← links)
- Dimension independent excess risk by stochastic gradient descent (Q2084455) (← links)
- Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization (Q2095692) (← links)
- Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning (Q2097782) (← links)
- Decentralized convex optimization under affine constraints for power systems control (Q2104293) (← links)
- Network manipulation algorithm based on inexact alternating minimization (Q2109010) (← links)
- Distributionally robust optimization with moment ambiguity sets (Q2111170) (← links)
- Inertial accelerated SGD algorithms for solving large-scale lower-rank tensor CP decomposition problems (Q2112682) (← links)
- Complexity of stochastic dual dynamic programming (Q2118093) (← links)
- Spatiotemporal-textual point processes for crime linkage detection (Q2154224) (← links)
- A stochastic primal-dual method for a class of nonconvex constrained optimization (Q2162528) (← links)
- Accelerated methods for saddle-point problem (Q2214606) (← links)
- Frank-Wolfe and friends: a journey into projection-free first-order optimization methods (Q2240671) (← links)
- Policy mirror descent for reinforcement learning: linear convergence, new sampling complexity, and generalized problem classes (Q2687069) (← links)
- Sample average approximations of strongly convex stochastic programs in Hilbert spaces (Q2688927) (← links)
- Stochastic first-order methods for convex and nonconvex functional constrained optimization (Q2689819) (← links)
- Efficient Algorithms for Distributionally Robust Stochastic Optimization with Discrete Scenario Support (Q5003210) (← links)
- Learning over No-Preferred and Preferred Sequence of Items for Robust Recommendation (Q5009685) (← links)
- Finite-Time Analysis and Restarting Scheme for Linear Two-Time-Scale Stochastic Approximation (Q5009779) (← links)
- Frank–Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning (Q5055686) (← links)
- Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm (Q5060780) (← links)
- Differentially Private Accelerated Optimization Algorithms (Q5080503) (← links)
- Simple and Optimal Methods for Stochastic Variational Inequalities, II: Markovian Noise and Policy Evaluation in Reinforcement Learning (Q5081106) (← links)
- Simple and Optimal Methods for Stochastic Variational Inequalities, I: Operator Extrapolation (Q5097022) (← links)
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints (Q5158760) (← links)
- (Q5878623) (← links)
- Subgradient ellipsoid method for nonsmooth convex problems (Q6038646) (← links)
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm (Q6047687) (← links)
- A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems (Q6051310) (← links)
- Universal Conditional Gradient Sliding for Convex Optimization (Q6071883) (← links)
- Accelerated gradient methods with absolute and relative noise in the gradient (Q6087056) (← links)
- Finite-time convergence rates of distributed local stochastic approximation (Q6088356) (← links)
- A unified analysis of stochastic gradient‐free Frank–Wolfe methods (Q6092499) (← links)
- Optimistic optimisation of composite objective with exponentiated update (Q6097136) (← links)
- A distributed proximal gradient method with time-varying delays for solving additive convex optimizations (Q6110428) (← links)
- A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems (Q6110456) (← links)
- Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization (Q6114954) (← links)
- Accelerated variance-reduced methods for saddle-point problems (Q6114960) (← links)
- Optimal Methods for Convex Risk-Averse Distributed Optimization (Q6116242) (← links)
- Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization (Q6116247) (← links)
- No-regret dynamics in the Fenchel game: a unified framework for algorithmic convex optimization (Q6126650) (← links)
- Randomized Douglas–Rachford Methods for Linear Systems: Improved Accuracy and Efficiency (Q6130543) (← links)
- Variable sample-size operator extrapolation algorithm for stochastic mixed variational inequalities (Q6131490) (← links)
- Optimal Algorithms for Stochastic Complementary Composite Minimization (Q6136660) (← links)
- Decentralized saddle-point problems with different constants of strong convexity and strong concavity (Q6149570) (← links)
- Stochastic regularized Newton methods for nonlinear equations (Q6158978) (← links)
- Block mirror stochastic gradient method for stochastic optimization (Q6158991) (← links)