Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization

Publication:5271983

DOI: 10.1109/TIT.2011.2182178
zbMath: 1365.94132
arXiv: 1009.0571
Wikidata: Q105584794
Scholia: Q105584794
MaRDI QID: Q5271983

Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, Martin J. Wainwright

Publication date: 12 July 2017

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://arxiv.org/abs/1009.0571

Related Items (48)

Learning models with uniform performance via distributionally robust optimization
Stochastic gradient descent for semilinear elliptic equations with uncertainties
Oracle lower bounds for stochastic gradient sampling algorithms
QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning
Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
Asymptotic optimality in stochastic optimization
Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization
slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks
Lower bounds for non-convex stochastic optimization
Asynchronous fully-decentralized SGD in the cluster-based model
Improved variance reduction extragradient method with line search for stochastic variational inequalities
Convergence rates of accelerated proximal gradient algorithms under independent noise
Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
Linear convergence of cyclic SAGA
Unnamed Item
The right complexity measure in locally private estimation: it is not the Fisher information
Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
Sample average approximations of strongly convex stochastic programs in Hilbert spaces
Differentially private inference via noisy optimization
On the Adaptivity of Stochastic Gradient-Based Optimization
Lower bounds for finding stationary points I
Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
Analysis of Online Composite Mirror Descent Algorithm
On the information-based complexity of stochastic programming
Why random reshuffling beats stochastic gradient descent
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Stochastic gradient descent with Polyak's learning rate
Optimization Methods for Large-Scale Machine Learning
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
Unnamed Item
The exact information-based complexity of smooth convex minimization
Near-optimal stochastic approximation for online principal component estimation
Minimizing finite sums with the stochastic average gradient
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Unnamed Item
Unnamed Item
Unnamed Item
Convergence of online mirror descent
Stochastic First-Order Methods with Random Constraint Projection
Derivative-free optimization methods
An ODE method to prove the geometric convergence of adaptive stochastic algorithms
Unnamed Item
Privacy Aware Learning
Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
A hybrid stochastic optimization framework for composite nonconvex optimization

This page was built for publication: Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization