A framework for parallel second order incremental optimization algorithms for solving partially separable problems
Publication: 2419531
DOI: 10.1007/s10589-018-00057-7
zbMath: 1420.90034
arXiv: 1509.01698
OpenAlex: W2907820928
MaRDI QID: Q2419531
Ş. İlker Birbil, Nurdan Kuru, Umut Şimşekli, M. Kaan Öztürk, A. Taylan Cemgil, Kamer Kaya, Hazal Koptagel, Figen Oztoprak
Publication date: 13 June 2019
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1509.01698
Keywords: matrix factorization; large-scale unconstrained optimization; balanced coloring; second-order information; balanced stratification; shared-memory parallel implementation
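For context on the structure named in the title and keywords, partial separability is the standard assumption that the objective decomposes into a sum of element functions, each depending on only a small subset of the variables. The display below is a generic illustration of that form; the notation is ours and is not taken from this record.

% Generic partially separable objective (illustrative notation, not from the paper):
% each element function f_i depends only on the variables indexed by a small set S_i.
\[
  \min_{x \in \mathbb{R}^n} \; f(x) \;=\; \sum_{i=1}^{m} f_i\!\left(x_{S_i}\right),
  \qquad S_i \subseteq \{1,\dots,n\}, \quad |S_i| \ll n .
\]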
Uses Software
Cites Work
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Parallel coordinate descent methods for big data optimization
- Incremental gradient algorithms with stepsizes bounded away from zero
- Representations of quasi-Newton matrices and their use in limited memory methods
- A globally convergent incremental Newton method
- Distributed-Memory Parallel Algorithms for Distance-2 Coloring and Related Problems in Derivative Computation
- ColPack
- Distributed Block Coordinate Descent for Minimizing Partially Separable Functions
- Hierarchical ALS Algorithms for Nonnegative Matrix and 3D Tensor Factorization
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Factor graphs and the sum-product algorithm
- D-ADMM: A Communication-Efficient Distributed Algorithm for Separable Optimization
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
- Incremental Least Squares Methods and the Extended Kalman Filter
- A Convergent Incremental Gradient Method with a Constant Step Size
- An Asynchronous Parallel Stochastic Coordinate Descent Algorithm
- What Color Is Your Jacobian? Graph Coloring for Computing Derivatives
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate