A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
DOI: 10.1016/j.tcs.2019.11.015 · zbMATH Open: 1436.68306 · OpenAlex: W2985478892 · MaRDI QID: Q2290691
Authors: Pooria Joulani, András György, Csaba Szepesvári
Publication date: 29 January 2020
Published in: Theoretical Computer Science
Full work available at URL: https://doi.org/10.1016/j.tcs.2019.11.015
Recommendations
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, and variational bounds
- A survey of algorithms and analysis for adaptive online learning
- Regret bounded by gradual variation for online convex optimization
- Analysis of Online Composite Mirror Descent Algorithm
- Scale-free online learning
Keywords: stochastic optimization; adaptive algorithms; online learning; mirror descent; regret bound; follow the regularized leader
MSC classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Online algorithms; streaming algorithms (68W27)
- Stochastic programming (90C15)
Cites Work
- Adaptive subgradient methods for online learning and stochastic optimization
- Prediction, Learning, and Games
- Convex analysis and monotone operator theory in Hilbert spaces
- Deep learning
- Primal-dual subgradient methods for convex problems
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Title not available
- Proximal Minimization Methods with Generalized Bregman Functions
- Logarithmic regret algorithms for online convex optimization
- Stochastic optimal control. The discrete time case
- Online learning and online convex optimization
- An optimal method for stochastic composite optimization
- Title not available
- Solving variational inequalities with stochastic mirror-prox algorithm
- Beyond the regret minimization barrier: optimal algorithms for stochastic strongly-convex optimization
- Cubic regularization of Newton method and its global performance
- Exponentiated gradient versus gradient descent for linear predictors
- Optimal distributed online prediction using mini-batches
- A generalized online mirror descent with applications to classification and regression
- Title not available
- A survey of algorithms and analysis for adaptive online learning
- Gradient descent learns linear dynamical systems
Cited In (4)
- Online composite optimization with time-varying regularizers
- Optimistic optimisation of composite objective with exponentiated update
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, and variational bounds
- A survey of algorithms and analysis for adaptive online learning