A globally convergent incremental Newton method
DOI: 10.1007/s10107-015-0897-y
zbMath: 1316.49033
arXiv: 1410.5284
OpenAlex: W2010315937
MaRDI QID: Q2349125
Mert Gürbüzbalaban, Asuman Ozdaglar, Pablo A. Parrilo
Publication date: 19 June 2015
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1410.5284
Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Nonlinear programming (90C30); Newton-type methods (49M15)
Related Items
- Sketched Newton–Raphson
- Accelerating incremental gradient optimization with curvature information
- A framework for parallel second order incremental optimization algorithms for solving partially separable problems
- On the linear convergence of the stochastic gradient method with constant step-size
- Convergence Rate of Incremental Gradient and Incremental Newton Methods
- Splitting proximal with penalization schemes for additive convex hierarchical minimization problems
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- New least-square algorithms
- Incremental gradient algorithms with stepsizes bounded away from zero
- The incremental Gauss-Newton algorithm with adaptive stepsize rule
- Incrementally updated gradient methods for constrained and regularized optimization
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Inexact Newton Methods
- A New Class of Incremental Gradient Methods for Least Squares Problems
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- RES: Regularized Stochastic BFGS Algorithm
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- Incremental Least Squares Methods and the Extended Kalman Filter
- Distributed Subgradient Methods for Multi-Agent Optimization
- A Convergent Incremental Gradient Method with a Constant Step Size
- On-line learning for very large data sets
- A Stochastic Approximation Method
- Inexact perturbed Newton methods and applications to a class of Krylov solvers