Optimal Distributed Online Prediction using Mini-Batches
Publication: 5405113
zbMath: 1283.68404
arXiv: 1012.1367
MaRDI QID: Q5405113
Authors: Ofer Dekel, Lin Xiao, Ohad Shamir, Ran Gilad-Bachrach
Publication date: 1 April 2014
Full work available at URL: https://arxiv.org/abs/1012.1367
Mathematics Subject Classification:
Convex programming (90C25)
Distributed algorithms (68W15)
Online algorithms; streaming algorithms (68W27)
Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) (68Q87)
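The underlying paper studies stochastic online prediction with smooth convex losses on a cluster of k nodes. Its distributed mini-batch (DMB) template has each node accumulate gradients over its share of a size-b mini-batch, averages the gradients across nodes, and applies a single serial update, so the system communicates once per batch yet retains the asymptotically optimal O(sqrt(m)) regret of serial online prediction. Below is a minimal single-process sketch of that pattern; the function names, step size, and the plain-SGD serial update are illustrative assumptions (the paper's analysis uses serial updates such as dual averaging), not the paper's exact algorithm.

import numpy as np

def dmb_sgd(sample_stream, grad, w0, k=4, b=64, eta=0.1, rounds=100):
    # Simulates the distributed mini-batch pattern on one process:
    # k workers each average gradients over b/k fresh samples, one
    # all-reduce averages the k worker gradients, and a single
    # serial gradient step is taken per batch.
    w = np.asarray(w0, dtype=float).copy()
    per_worker = b // k
    for _ in range(rounds):
        worker_grads = []
        for _worker in range(k):
            # Each worker touches only its slice of the batch.
            g = np.mean([grad(w, next(sample_stream))
                         for _ in range(per_worker)], axis=0)
            worker_grads.append(g)
        # One communication round per batch: average worker gradients.
        g_bar = np.mean(worker_grads, axis=0)
        w -= eta * g_bar  # serial update on the averaged gradient
    return w

# Illustrative usage on a synthetic noisy least-squares stream.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

def stream():
    while True:
        x = rng.normal(size=2)
        yield x, x @ w_true + 0.1 * rng.normal()

def sq_grad(w, sample):
    x, y = sample
    return (x @ w - y) * x  # gradient of 0.5 * (x.w - y)^2

w_hat = dmb_sgd(stream(), sq_grad, np.zeros(2))  # approaches w_true

The point of the pattern is that communication cost scales with the number of batches rather than the number of samples, while the averaged gradient has variance reduced by the batch size, which is what preserves the serial regret rate for smooth losses.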
Related Items (33)
Stochastic distributed learning with gradient quantization and double-variance reduction
Stochastic gradient descent for semilinear elliptic equations with uncertainties
Distributed optimization and statistical learning for large-scale penalized expectile regression
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Consensus-based modeling using distributed feature construction with ILP
Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations
Complexity Analysis of stochastic gradient methods for PDE-constrained optimal Control Problems with uncertain parameters
Multi-round smoothed composite quantile regression for distributed data
Vaidya's method for convex stochastic optimization problems in small dimension
Unifying mirror descent and dual averaging
Semi-discrete optimal transport: hardness, regularization and numerical solution
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Feature-aware regularization for sparse online learning
Online Estimation for Functional Data
Scaling up stochastic gradient descent for non-convex optimisation
On the convergence analysis of asynchronous SGD for solving consistent linear systems
Communication-efficient sparse composite quantile regression for distributed data
On the parallelization upper bound for asynchronous stochastic gradients descent in non-convex optimization
Batched Stochastic Gradient Descent with Weighted Sampling
A sparsity preserving stochastic gradient methods for sparse regression
Revisiting EXTRA for Smooth Distributed Optimization
Distributed learning for random vector functional-link networks
Sample size selection in optimization methods for machine learning
Random Gradient Extrapolation for Distributed and Stochastic Optimization
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Accelerating deep neural network training with inconsistent stochastic gradient descent
Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Optimal Rates for Multi-pass Stochastic Gradient Methods
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization