Newton-like Method with Diagonal Correction for Distributed Optimization
Publication: 5275293
DOI: 10.1137/15M1038049
zbMath: 1371.90100
arXiv: 1509.01703
OpenAlex: W2963172825
MaRDI QID: Q5275293
Dragana Bajović, Nataša Krklec Jerinkić, Dušan Jakovetić, Nataša Krejić
Publication date: 11 July 2017
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1509.01703
Numerical mathematical programming methods (65K05)
Convex programming (90C25)
Methods of quasi-Newton type (90C53)
Related Items
- Linear convergence rate analysis of a class of exact first-order distributed methods for weight-balanced time-varying networks and uncoordinated step sizes
- A split Levenberg-Marquardt method for large-scale sparse problems
- Distributed optimization over directed graphs with row stochasticity and constraint regularity
- Exact spectral-like gradient method for distributed optimization
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Distributed stochastic subgradient projection algorithms for convex optimization
- Distributed multi-agent optimization with state-dependent communication
- Sample size selection in optimization methods for machine learning
- Block diagonally dominant matrices and generalizations of the Gerschgorin circle theorem
- Line search methods with variable sample size for unconstrained optimization
- Nonmonotone line search methods with variable sample size
- Newton-like method with modification of the right-hand-side vector
- Global Convergence of Online Limited Memory BFGS
- On the Convergence of Decentralized Gradient Descent
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Newton-Raphson Consensus for Distributed Convex Optimization
- Distributed Optimization With Local Domains: Applications in MPC and Network Flows
- Fast Distributed Gradient Methods
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Inexact Newton Methods
- Consensus in Ad Hoc WSNs With Noisy Links—Part I: Distributed Estimation of Deterministic Signals
- Diffusion LMS Strategies for Distributed Estimation
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- RES: Regularized Stochastic BFGS Algorithm
- Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
- Distributed Gradient Methods with Variable Number of Working Nodes
- DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers
- Distributed Subgradient Methods for Multi-Agent Optimization
- A Primal-Dual Quasi-Newton Method for Exact Consensus Optimization
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization
- Distributed Parameter Estimation in Sensor Networks: Nonlinear Observation Models and Imperfect Communication
- A Distributed Newton Method for Network Utility Maximization–I: Algorithm