Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimization
Publication: 2088969
DOI: 10.1007/s12532-022-00219-z
zbMath: 1496.90035
OpenAlex: W4220813952
MaRDI QID: Q2088969
Chih-Jen Lin, Po-Wei Wang, Ching-pei Lee
Publication date: 6 October 2022
Published in: Mathematical Programming Computation
Full work available at URL: https://doi.org/10.1007/s12532-022-00219-z
Numerical mathematical programming methods (65K05)
Large-scale problems in mathematical programming (90C06)
Nonlinear programming (90C30)
Parallel algorithms in computer science (68W10)
Distributed algorithms (68W15)
Uses Software
Cites Work
- Gradient methods for minimizing composite functions
- On the convergence of the iterates of the ``fast iterative shrinkage/thresholding algorithm''
- Generalized Hessian matrix and second-order optimality conditions for problems with \(C^{1,1}\) data
- On the limited memory BFGS method for large scale optimization
- Introductory lectures on convex optimization. A basic course.
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- The Rate of Convergence of Nesterov's Accelerated Forward-Backward Method is Actually Faster Than $1/k^2$
- Two-Point Step Size Gradient Methods
- A finite Newton method for classification
- Full convergence of the steepest descent method with inexact line searches
- EXTRA: An Exact First-Order Algorithm for Decentralized Consensus Optimization