Pages that link to "Item:Q5271983"
From MaRDI portal
The following pages link to Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization (Q5271983):
Displaying 48 items.
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions (Q507334)
- Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex (Q510299)
- The exact information-based complexity of smooth convex minimization (Q511109)
- Minimizing finite sums with the stochastic average gradient (Q517295)
- Near-optimal stochastic approximation for online principal component estimation (Q681490)
- Learning models with uniform performance via distributionally robust optimization (Q820804)
- On variance reduction for stochastic smooth convex optimization with multiplicative noise (Q1739038)
- Stochastic gradient descent with Polyak's learning rate (Q1983178)
- Analysis of biased stochastic gradient descent using sequential semidefinite programs (Q2020610)
- An ODE method to prove the geometric convergence of adaptive stochastic algorithms (Q2074991)
- A hybrid stochastic optimization framework for composite nonconvex optimization (Q2118109)
- Stochastic gradient descent for semilinear elliptic equations with uncertainties (Q2127008)
- Oracle lower bounds for stochastic gradient sampling algorithms (Q2137007)
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space (Q2162529)
- Linear convergence of cyclic SAGA (Q2193004)
- Lower bounds for finding stationary points I (Q2205972)
- Why random reshuffling beats stochastic gradient descent (Q2227529)
- Convergence of online mirror descent (Q2278461)
- Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs (Q2331322)
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case (Q2397263)
- Convergence rates of accelerated proximal gradient algorithms under independent noise (Q2420162)
- On the information-based complexity of stochastic programming (Q2450744)
- Asymptotic optimality in stochastic optimization (Q2656586)
- Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm (Q2680399)
- Sample average approximations of strongly convex stochastic programs in Hilbert spaces (Q2688927)
- Stochastic First-Order Methods with Random Constraint Projection (Q2796796)
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization (Q2921184)
- (Q4558495)
- (Q4558562)
- (Q4558567)
- Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization (Q4575825)
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities (Q4620417)
- Optimization Methods for Large-Scale Machine Learning (Q4641709)
- (Q4998961)
- (Q5053263)
- QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning (Q5067429)
- slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks (Q5095499)
- On the Adaptivity of Stochastic Gradient-Based Optimization (Q5114394)
- (Q5148937)
- Derivative-free optimization methods (Q5230522)
- Analysis of Online Composite Mirror Descent Algorithm (Q5380674)
- Privacy Aware Learning (Q5501941)
- Lower bounds for non-convex stochastic optimization (Q6038643)
- Asynchronous fully-decentralized SGD in the cluster-based model (Q6057314)
- Improved variance reduction extragradient method with line search for stochastic variational inequalities (Q6064028)
- The right complexity measure in locally private estimation: it is not the Fisher information (Q6151956)
- Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality (Q6161313)
- Differentially private inference via noisy optimization (Q6183772)