Information-Based Complexity, Feedback and Dynamics in Convex Programming
From MaRDI portal
Publication:5272322
DOI: 10.1109/TIT.2011.2154375
zbMath: 1365.93191
arXiv: 1010.2285
OpenAlex: W2155192981
MaRDI QID: Q5272322
Maxim Raginsky, Alexander Rakhlin
Publication date: 12 July 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://arxiv.org/abs/1010.2285
Mathematics Subject Classification:
- Analysis of algorithms and problem complexity (68Q25)
- Convex programming (90C25)
- Feedback control (93B52)
- Information theory (general) (94A15)
Related Items (12)
- Oracle lower bounds for stochastic gradient sampling algorithms
- Sub-linear convergence of a stochastic proximal iteration method in Hilbert space
- Localization of VC classes: beyond local Rademacher complexities
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
- Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization
- Lower bounds for non-convex stochastic optimization
- Stochastic gradient descent with Polyak's learning rate
- Optimization Methods for Large-Scale Machine Learning
- The exact information-based complexity of smooth convex minimization
- Surrogate losses in passive and active learning
- Lower error bounds for the stochastic gradient descent optimization algorithm: sharp convergence rates for slowly and fast decaying learning rates
- Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization
This page was built for publication: Information-Based Complexity, Feedback and Dynamics in Convex Programming