scientific article; zbMATH DE number 6860827
From MaRDI portal
Publication:4637046
zbMath: 1435.68380 · MaRDI QID: Q4637046
Tianbao Yang, Qihang Lin, Tengyu Ma, Jason D. Lee
Publication date: 17 April 2018
Full work available at URL: http://jmlr.csail.mit.edu/papers/v18/16-640.html
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Keywords: lower bound; first-order method; communication complexity; distributed optimization; stochastic variance-reduced gradient
Mathematics Subject Classification: Ridge regression; shrinkage estimators (Lasso) (62J07) · Convex programming (90C25) · Applications of mathematical programming (90C90) · Stochastic programming (90C15) · Distributed algorithms (68W15)
Related Items (9)
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization
- First-Order Newton-Type Estimator for Distributed Estimation and Inference
- On the convergence analysis of asynchronous SGD for solving consistent linear systems
- Stochastic regularized Newton methods for nonlinear equations
- Communication-Efficient Accurate Statistical Estimation
- Unnamed Item
- Communication-Efficient Distributed Statistical Inference
- Unnamed Item
- Unnamed Item
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Minimizing finite sums with the stochastic average gradient
- A high-performance, portable implementation of the MPI message passing interface standard
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- The tail of the hypergeometric distribution
- Introductory lectures on convex optimization. A basic course.
- An optimal randomized incremental gradient method
- On the global and linear convergence of the generalized alternating direction method of multipliers
- On Lower and Upper Bounds for Smooth and Strongly Convex Optimization Problems
- Fast Distributed Gradient Methods
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Convergence Rate of Distributed ADMM Over Networks
- D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Multi-Agent Distributed Optimization via Inexact Consensus ADMM
- A Proximal Gradient Algorithm for Decentralized Composite Optimization
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
- Distributed Subgradient Methods for Multi-Agent Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling
- An $O(1/k)$ Gradient Method for Network Resource Allocation Problems
- Communication lower bounds for statistical estimation problems via a distributed data processing inequality
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization