Analysis of Online Composite Mirror Descent Algorithm
DOI: 10.1162/NECO_A_00930 · zbMATH Open: 1474.68314 · DBLP: journals/neco/LeiZ17 · OpenAlex: W2578544862 · Wikidata: Q39016839 · MaRDI QID: Q5380674
Authors: Yunwen Lei, Ding-Xuan Zhou
Publication date: 6 June 2019
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco_a_00930
Recommendations
- On the efficiency of a randomized mirror descent algorithm in online optimization problems
- Distributed Mirror Descent for Online Composite Optimization
- Mirror descent and constrained online optimization problems
- Convergence of online mirror descent
- A generalized online mirror descent with applications to classification and regression
- Distributed Online Optimization in Dynamic Environments Using Mirror Descent
- Convergence analysis of online algorithms
- Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
- Validation analysis of mirror descent stochastic approximation method
- Competitive analysis for multi-objective online algorithms
Convex programming (90C25) Computational aspects of data analysis and big data (68T09) Online algorithms; streaming algorithms (68W27) Analysis of algorithms (68W40)
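For orientation (the page itself carries no abstract): the online composite mirror descent update analyzed in work of this kind is w_{t+1} = argmin_w { ⟨∇f_t(w_t), w⟩ + r(w) + (1/η) D_ψ(w, w_t) }, where f_t is the loss revealed at round t, r is the composite regularizer, and D_ψ is the Bregman distance of the mirror map ψ. The sketch below uses the Euclidean mirror map ψ(w) = ½‖w‖², under which the update has the closed-form proximal (soft-thresholding) step familiar from forward-backward splitting. The step size `eta`, the ℓ1 weight `lam`, and the noise-free toy data stream are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (closed form of the MD step
    # with Euclidean mirror map and an l1 composite term).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_composite_md(stream, dim, eta=0.1, lam=0.01):
    """One pass of composite mirror descent with the Euclidean mirror map:
    w_{t+1} = argmin_w <g_t, w> + lam*||w||_1 + (1/(2*eta))*||w - w_t||^2
            = soft_threshold(w_t - eta*g_t, eta*lam).
    """
    w = np.zeros(dim)
    for x, y in stream:
        grad = (w @ x - y) * x          # gradient of the round-t loss 0.5*(w.x - y)^2
        w = soft_threshold(w - eta * grad, eta * lam)
    return w

# Toy stream: noise-free linear targets with a sparse ground truth.
rng = np.random.default_rng(0)
w_star = np.array([1.0, 0.0, -2.0, 0.0])
data = [(x, x @ w_star) for x in rng.standard_normal((500, 4))]
w_hat = online_composite_md(data, dim=4)
```

With a non-Euclidean ψ (e.g. the negative entropy on the simplex) the same template yields multiplicative-weights-style updates; only the Bregman projection changes.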
Cites Work
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Online learning algorithms
- Support Vector Machines
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- Linearized Bregman iterations for compressed sensing
- Support vector machine soft margin classifiers: error analysis
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Sharp uniform convexity and smoothness inequalities for trace norms
- Convergence of stochastic proximal gradient algorithm
- Efficient online and batch learning using forward backward splitting
- On the Generalization Ability of On-Line Learning Algorithms
- Online gradient descent learning algorithms
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Regularization schemes for minimum error entropy principle
- Online learning with Markov sampling
- Online Regularized Classification Algorithms
- Unregularized online learning algorithms with general loss functions
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Learning theory of randomized Kaczmarz algorithm
- Modified Fejér sequences and applications
- Iterative regularization for learning with convex loss functions
- Online Pairwise Learning Algorithms
Cited In (10)
- Learning theory of randomized sparse Kaczmarz method
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Analysis of singular value thresholding algorithm for matrix completion
- Convergence of online mirror descent
- Block coordinate type methods for optimization and learning
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
- A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, and variational bounds
- A stochastic variance reduction algorithm with Bregman distances for structured composite problems
- Sparse online regression algorithm with insensitive loss functions
- Federated learning for minimizing nonsmooth convex loss functions