Convergence of online mirror descent
DOI: 10.1016/J.ACHA.2018.05.005
zbMATH Open: 1494.68219
arXiv: 1802.06357
OpenAlex: W2803423166
Wikidata: Q129764098 (Scholia: Q129764098)
MaRDI QID: Q2278461
FDO: Q2278461
Authors: Yunwen Lei, Ding-Xuan Zhou
Publication date: 5 December 2019
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1802.06357
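For context, a minimal sketch of the online mirror descent update studied in this line of work, in its standard textbook form (the paper's exact setting, step-size schedule, and assumptions may differ):

\[
x_{t+1} = \arg\min_{x \in \Omega} \Big\{ \eta_t \langle g_t, x \rangle + D_\Phi(x, x_t) \Big\},
\qquad
D_\Phi(x, y) = \Phi(x) - \Phi(y) - \langle \nabla \Phi(y), x - y \rangle,
\]

where \(g_t\) is a (sub)gradient of the loss observed at round \(t\), \(\eta_t > 0\) is the step size, and \(D_\Phi\) is the Bregman distance induced by a strongly convex mirror map \(\Phi\). With \(\Phi(x) = \tfrac{1}{2}\|x\|_2^2\) this reduces to online projected gradient descent.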
Recommendations
- Analysis of Online Composite Mirror Descent Algorithm
- A generalized online mirror descent with applications to classification and regression
- Convergence of unregularized online learning algorithms
- On the convergence of mirror descent beyond stochastic convex programming
- Convergence analysis of online algorithms
MSC classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Convex programming (90C25)
- Online algorithms; streaming algorithms (68W27)
- Stochastic approximation (62L20)
- Stochastic programming (90C15)
Cites Work
- A randomized Kaczmarz algorithm with exponential convergence
- Robust Estimation of a Location Parameter
- Online learning algorithms
- A Stochastic Approximation Method
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Robust Stochastic Approximation Approach to Stochastic Programming
- AIR tools -- a MATLAB package of algebraic iterative reconstruction methods
- Linearized Bregman iterations for compressed sensing
- Support vector machine soft margin classifiers: error analysis
- Title not available
- Sharp uniform convexity and smoothness inequalities for trace norms
- Incremental subgradient methods for nondifferentiable optimization
- Multi-kernel regularized classifiers
- Online gradient descent learning algorithms
- Title not available
- Regularization schemes for minimum error entropy principle
- Regularization techniques for learning with matrices
- On Complexity Issues of Online Learning Algorithms
- Online Regularized Classification Algorithms
- Online regularized learning with pairwise loss functions
- Almost sure convergence of the Kaczmarz algorithm with random measurements
- Unregularized online learning algorithms with general loss functions
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Distributed learning with regularized least squares
- Learning theory of randomized Kaczmarz algorithm
- Optimization methods for large-scale machine learning
- Learning theory of distributed spectral algorithms
- Thresholded spectral algorithms for sparse approximations
- Analysis of Online Composite Mirror Descent Algorithm
Cited In (11)
- Conformal mirror descent with logarithmic divergences
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Analysis of singular value thresholding algorithm for matrix completion
- Fast and strong convergence of online learning algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- The information geometry of mirror descent
- Block coordinate type methods for optimization and learning
- Analogues of switching subgradient schemes for relatively Lipschitz-continuous convex programming problems
- High probability bounds on AdaGrad for constrained weakly convex optimization
- Distributed learning and distribution regression of coefficient regularization
- Federated learning for minimizing nonsmooth convex loss functions