Optimistic optimisation of composite objective with exponentiated update
Publication: 6097136
DOI: 10.1007/s10994-022-06229-1
arXiv: 2208.04065
OpenAlex: W4292663683
MaRDI QID: Q6097136
FDO: Q6097136
Authors: Weijia Shao, Fikret Sivrikaya, Şahin Albayrak
Publication date: 12 June 2023
Published in: Machine Learning
Abstract: This paper proposes a new family of algorithms for the online optimisation of composite objectives. The algorithms can be interpreted as a combination of the exponentiated gradient and \(p\)-norm algorithms. Combined with algorithmic ideas of adaptivity and optimism, the proposed algorithms achieve a sequence-dependent regret upper bound, matching the best-known bounds for sparse target decision variables. Furthermore, the algorithms have efficient implementations for popular composite objectives and constraints and can be converted to stochastic optimisation algorithms with the optimal accelerated rate for smooth objectives.
Full work available at URL: https://arxiv.org/abs/2208.04065
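The combination of optimism, a composite term, and an exponentiated update described in the abstract can be illustrated by the standard optimistic composite mirror-descent template. The following is a minimal sketch assuming an entropic mirror map and a gradient hint \(\tilde g_{t+1}\); it is not the paper's exact algorithm, whose adaptive step sizes and \(p\)-norm interpolation are omitted here:

\[
x_{t+1} \in \operatorname*{arg\,min}_{x \in \mathcal{K}} \left\{ \langle g_t + \tilde g_{t+1}, x \rangle + \psi(x) + \frac{1}{\eta_t} B_{\Phi}(x, x_t) \right\},
\qquad
B_{\Phi}(x, y) = \Phi(x) - \Phi(y) - \langle \nabla \Phi(y), x - y \rangle,
\]

where \(g_t\) is the observed subgradient of the loss at \(x_t\), \(\psi\) is the composite (regularisation) term, \(\eta_t > 0\) is a step size, and \(\Phi(x) = \sum_i x_i \log x_i\) is the negative entropy. With \(\psi \equiv 0\) and \(\mathcal{K}\) the probability simplex, the step reduces to the familiar exponentiated (multiplicative) update \(x_{t+1,i} \propto x_{t,i} \exp\!\big(-\eta_t (g_{t,i} + \tilde g_{t+1,i})\big)\).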
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Adaptive subgradient methods for online learning and stochastic optimization
- Introductory lectures on convex optimization. A basic course.
- Linear coupling: an ultimate unification of gradient and mirror descent
- Convexity and optimization in Banach spaces.
- Title not available
- The multiplicative weights update method: a meta-algorithm and applications
- On the Generalization Ability of On-Line Learning Algorithms
- Regularization techniques for learning with matrices
- Exponentiated gradient versus gradient descent for linear predictors
- The robustness of the \(p\)-norm algorithms
- Improved Risk Tail Bounds for On-Line Algorithms
- A generalized online mirror descent with applications to classification and regression
- Smoothed Low Rank and Sparse Matrix Recovery by Iteratively Reweighted Least Squares Minimization
- Scale-free online learning
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- First-order and stochastic optimization methods for machine learning
Cited In (1)