HOGWILD
Software:40110
swMATH: 28396
MaRDI QID: Q40110
FDO: Q40110
Author name not available
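HOGWILD! (Niu, Recht, Ré and Wright, 2011) parallelizes stochastic gradient descent by letting worker threads read and write a shared parameter vector with no locking at all; when each update touches only a few coordinates, the paper shows the resulting races do little harm and near-linear speedups are possible. Below is a minimal illustrative sketch of that lock-free update pattern on a synthetic least-squares problem. It is not the reference implementation: the problem data, step size, and thread count are arbitrary choices, and under CPython the GIL serializes the threads, so the sketch shows the access pattern rather than a real speedup.

```python
# Minimal sketch of Hogwild-style lock-free SGD (illustrative only).
# Four threads update a shared parameter vector w with no locks;
# concurrent writes may race, which the scheme tolerates for sparse updates.
import threading

import numpy as np

# Synthetic least-squares problem: minimize sum_i (a_i^T w - b_i)^2.
n_samples, n_features = 1000, 50
rng = np.random.default_rng(0)
A = rng.standard_normal((n_samples, n_features))
b = A @ rng.standard_normal(n_features)

w = np.zeros(n_features)  # shared state, updated without synchronization
step = 1e-4

def worker(seed: int, n_steps: int) -> None:
    local_rng = np.random.default_rng(seed)  # per-thread sampling
    for _ in range(n_steps):
        i = local_rng.integers(n_samples)      # pick one example at random
        g = 2.0 * (A[i] @ w - b[i]) * A[i]     # its gradient at the current w
        w -= step * g                          # lock-free in-place update

threads = [threading.Thread(target=worker, args=(s, 20_000)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final residual:", np.linalg.norm(A @ w - b))
```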
Cited In (71)
- Avoiding Communication in Primal and Dual Block Coordinate Descent Methods
- Title not available
- Title not available
- Distributed stochastic inertial-accelerated methods with delayed derivatives for nonconvex problems
- Redundancy techniques for straggler mitigation in distributed optimization and learning
- Title not available
- A class of smooth exact penalty function methods for optimization problems with orthogonality constraints
- Randomized Kaczmarz with averaging
- Collaborative filtering for massive multinomial data
- Parallel coordinate descent methods for big data optimization
- Vampire with a brain is a good ITP hammer
- On the parallelization upper bound for asynchronous stochastic gradients descent in non-convex optimization
- A class of parallel doubly stochastic algorithms for large-scale learning
- Asynchronous parallel primal-dual block coordinate update methods for affinely constrained convex programs
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
- An asynchronous distributed and scalable generalized Nash equilibrium seeking algorithm for strongly monotone games
- ARock: an algorithmic framework for asynchronous parallel coordinate updates
- Towards easier and faster sequence labeling for natural language processing: a search-based probabilistic online learning framework (SAPO)
- Distributed matrix completion and robust factorization
- Weighted SGD for \(\ell_p\) regression with randomized preconditioning
- An accelerated communication-efficient primal-dual optimization framework for structured machine learning
- Optimization methods for large-scale machine learning
- Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization
- On unbounded delays in asynchronous parallel fixed-point algorithms
- Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
- A Continuous-Time Analysis of Distributed Stochastic Gradient
- Asynchronous stochastic coordinate descent: parallelism and convergence properties
- Matrix completion under interval uncertainty
- Deep convolutional neural networks for image classification: a comprehensive review
- A distributed flexible delay-tolerant proximal gradient algorithm
- Stochastic reformulations of linear systems: algorithms and convergence theory
- Robust asynchronous stochastic gradient-push: asymptotically optimal and network-independent performance for strongly convex functions
- Title not available
- Numerical methods for the resource allocation problem in a computer network
- Dynamic assortment personalization in high dimensions
- Properties of vector embeddings in social networks
- A sparse completely positive relaxation of the modularity maximization for community detection
- Consensus-based modeling using distributed feature construction with ILP
- Asynchronous parallel algorithms for nonconvex optimization
- On the convergence of asynchronous parallel iteration with unbounded delays
- Performance analysis of asynchronous parallel Jacobi
- Fast and reliable parameter estimation from nonlinear observations
- Eigenvector Computation and Community Detection in Asynchronous Gossip Models
- Likelihood Inference for Large Scale Stochastic Blockmodels With Covariates Based on a Divide-and-Conquer Parallelizable Algorithm With Communication
- Perturbed iterate analysis for asynchronous stochastic optimization
- Global convergence rate of proximal incremental aggregated gradient methods
- Online learning in optical tomography: a stochastic approach
- OCam: out-of-core coordinate descent algorithm for matrix completion
- Efficient inference and learning in a large knowledge base. Reasoning with extracted information using a locally groundable first-order probabilistic logic
- Sampling Strategies for Fast Updating of Gaussian Markov Random Fields
- EGC: entropy-based gradient compression for distributed deep learning
- Improved asynchronous parallel optimization analysis for stochastic incremental methods
- Coordinate descent algorithms
- Practical matrix completion and corruption recovery using proximal alternating robust subspace minimization
- Distributed stochastic optimization with large delays
- A block successive upper-bound minimization method of multipliers for linearly constrained convex optimization
- Distributed and robust support vector machine
- On the convergence analysis of asynchronous SGD for solving consistent linear systems
- Variance reduction for root-finding problems
- Revisiting EXTRA for Smooth Distributed Optimization
- Zipline: an optimized algorithm for the elastic bulk synchronous parallel model
- What can be sampled locally?
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Comment: A brief survey of the current state of play for Bayesian computation in data science at big-data scale
- Block delayed Majorize-Minimize subspace algorithm for large scale image restoration
- Sample size selection in optimization methods for machine learning
- Title not available
- Parallelizable Algorithms for Optimization Problems with Orthogonality Constraints
- DSCOVR: randomized primal-dual block coordinate algorithms for asynchronous distributed optimization
- A general distributed dual coordinate optimization framework for regularized loss minimization
- An Optimal Algorithm for Decentralized Finite-Sum Optimization