Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization

DOI: 10.1109/TIT.2011.2182178
zbMath: 1365.94132
arXiv: 1009.0571
Wikidata: Q105584794
Scholia: Q105584794
MaRDI QID: Q5271983

Alekh Agarwal, Peter L. Bartlett, Pradeep Ravikumar, Martin J. Wainwright

Publication date: 12 July 2017

Published in: IEEE Transactions on Information Theory

Full work available at URL: https://arxiv.org/abs/1009.0571

Related Items

Learning models with uniform performance via distributionally robust optimization
Stochastic gradient descent for semilinear elliptic equations with uncertainties
Oracle lower bounds for stochastic gradient sampling algorithms
QNG: A Quasi-Natural Gradient Method for Large-Scale Statistical Learning
Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
Asymptotic optimality in stochastic optimization
Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization
slimTrain---A Stochastic Approximation Method for Training Separable Deep Neural Networks
Lower bounds for non-convex stochastic optimization
Asynchronous fully-decentralized SGD in the cluster-based model
Improved variance reduction extragradient method with line search for stochastic variational inequalities
Convergence rates of accelerated proximal gradient algorithms under independent noise
Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
Linear convergence of cyclic SAGA
The right complexity measure in locally private estimation: it is not the Fisher information
Convergence of Random Reshuffling under the Kurdyka–Łojasiewicz Inequality
Sample average approximations of strongly convex stochastic programs in Hilbert spaces
Differentially private inference via noisy optimization
On the Adaptivity of Stochastic Gradient-Based Optimization
Lower bounds for finding stationary points I
Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
Analysis of Online Composite Mirror Descent Algorithm
On the information-based complexity of stochastic programming
Why random reshuffling beats stochastic gradient descent
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Stochastic gradient descent with Polyak's learning rate
Optimization Methods for Large-Scale Machine Learning
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
Gradient-free proximal methods with inexact oracle for convex stochastic nonsmooth optimization problems on the simplex
The exact information-based complexity of smooth convex minimization
Near-optimal stochastic approximation for online principal component estimation
Minimizing finite sums with the stochastic average gradient
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Convergence of online mirror descent
Stochastic First-Order Methods with Random Constraint Projection
Derivative-free optimization methods
An ODE method to prove the geometric convergence of adaptive stochastic algorithms
Privacy Aware Learning
Distributed stochastic subgradient projection algorithms based on weight-balancing over time-varying directed graphs
Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
A hybrid stochastic optimization framework for composite nonconvex optimization