Optimal Distributed Online Prediction using Mini-Batches

From MaRDI portal
Publication: 5405113

zbMath: 1283.68404 · arXiv: 1012.1367 · MaRDI QID: Q5405113

Ofer Dekel, Lin Xiao, Ohad Shamir, Ran Gilad-Bachrach

Publication date: 1 April 2014

Full work available at URL: https://arxiv.org/abs/1012.1367




Related Items (40)

Stochastic distributed learning with gradient quantization and double-variance reduction
Stochastic gradient descent for semilinear elliptic equations with uncertainties
Distributed optimization and statistical learning for large-scale penalized expectile regression
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Consensus-based modeling using distributed feature construction with ILP
Quantile-Based Iterative Methods for Corrupted Systems of Linear Equations
Complexity Analysis of stochastic gradient methods for PDE-constrained optimal Control Problems with uncertain parameters
Multi-round smoothed composite quantile regression for distributed data
Vaidya's method for convex stochastic optimization problems in small dimension
Unifying mirror descent and dual averaging
Semi-discrete optimal transport: hardness, regularization and numerical solution
Non-smooth setting of stochastic decentralized convex optimization problem over time-varying graphs
Feature-aware regularization for sparse online learning
Online Estimation for Functional Data
Scaling up stochastic gradient descent for non-convex optimisation
On the convergence analysis of asynchronous SGD for solving consistent linear systems
Communication-efficient sparse composite quantile regression for distributed data
On the parallelization upper bound for asynchronous stochastic gradients descent in non-convex optimization
Batched Stochastic Gradient Descent with Weighted Sampling
A sparsity preserving stochastic gradient methods for sparse regression
Unnamed Item
Revisiting EXTRA for Smooth Distributed Optimization
Unnamed Item
Distributed learning for random vector functional-link networks
Sample size selection in optimization methods for machine learning
Unnamed Item
Unnamed Item
Random Gradient Extrapolation for Distributed and Stochastic Optimization
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Accelerating deep neural network training with inconsistent stochastic gradient descent
Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Optimal Rates for Multi-pass Stochastic Gradient Methods
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent
Unnamed Item
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
Unnamed Item
Unnamed Item
Online Learning of a Weighted Selective Naive Bayes Classifier with Non-convex Optimization




This page was built for publication: Optimal Distributed Online Prediction using Mini-Batches