Optimal Distributed Online Prediction using Mini-Batches

zbMath: 1283.68404
arXiv: 1012.1367
MaRDI QID: Q5405113

Ofer Dekel, Lin Xiao, Ohad Shamir, Ran Gilad-Bachrach

Publication date: 1 April 2014

Full work available at URL: https://arxiv.org/abs/1012.1367


Mathematics Subject Classification (MSC):

90C25: Convex programming

68W15: Distributed algorithms

68W27: Online algorithms; streaming algorithms

68Q87: Probability in computer science (algorithm analysis, random structures, phase transitions, etc.)


Related Items

Batched Stochastic Gradient Descent with Weighted Sampling
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Optimal Rates for Multi-pass Stochastic Gradient Methods
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations
Complexity analysis of stochastic gradient methods for PDE-constrained optimal control problems with uncertain parameters
Stochastic distributed learning with gradient quantization and double-variance reduction
Unifying mirror descent and dual averaging
Semi-discrete optimal transport: hardness, regularization and numerical solution
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Online Estimation for Functional Data
Scaling up stochastic gradient descent for non-convex optimisation
A sparsity preserving stochastic gradient methods for sparse regression
Distributed learning for random vector functional-link networks
Sample size selection in optimization methods for machine learning
Feature-aware regularization for sparse online learning
Consensus-based modeling using distributed feature construction with ILP
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
Stochastic gradient descent for semilinear elliptic equations with uncertainties
Distributed optimization and statistical learning for large-scale penalized expectile regression
Multi-round smoothed composite quantile regression for distributed data
Vaidya's method for convex stochastic optimization problems in small dimension
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Accelerating deep neural network training with inconsistent stochastic gradient descent
On the convergence analysis of asynchronous SGD for solving consistent linear systems
Communication-efficient sparse composite quantile regression for distributed data
On the parallelization upper bound for asynchronous stochastic gradients descent in non-convex optimization
Revisiting EXTRA for Smooth Distributed Optimization
Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication