Analysis of the maximum magnification by the scaled memoryless DFP updating formula with application to compressive sensing
Publication: 2243965
DOI: 10.1007/s00009-021-01905-3
zbMath: 1480.90251
OpenAlex: W3210310192
MaRDI QID: Q2243965
Authors: Zohre Aminifard, Saman Babaie-Kafaki
Publication date: 11 November 2021
Published in: Mediterranean Journal of Mathematics
Full work available at URL: https://doi.org/10.1007/s00009-021-01905-3
Classification (MSC):
- Methods of quasi-Newton type (90C53)
- Numerical methods based on nonlinear programming (49M37)
- Image processing (compression, reconstruction, etc.) in information and communication theory (94A08)
Cites Work
- Smooth minimization of non-smooth functions
- A modified scaling parameter for the memoryless BFGS updating formula
- Globally convergent three-term conjugate gradient methods that use secant conditions and generate descent search directions for unconstrained optimization
- A new generalized shrinkage conjugate gradient method for sparse recovery
- On optimality of the parameters of self-scaling memoryless quasi-Newton updating formulae
- Proximal quasi-Newton methods for nondifferentiable convex optimization
- Globally convergent BFGS method for nonsmooth convex optimization
- The revised DFP algorithm without exact line search
- Robust regression: Asymptotics, conjectures and Monte Carlo
- An augmented memoryless BFGS method based on a modified secant equation with application to compressed sensing
- An optimal parameter choice for the Dai-Liao family of conjugate gradient methods by avoiding a direction of the maximum magnification by the search direction matrix
- Broad echo state network for multivariate time series prediction
- A modified descent Polak-Ribière-Polyak conjugate gradient method with global convergence property for nonconvex functions
- A modified scaled memoryless BFGS preconditioned conjugate gradient method for unconstrained optimization
- Optimization theory and methods. Nonlinear programming
- Large sparse signal recovery by conjugate gradient algorithm based on smoothing technique
- Fixed-Point Continuation Applied to Compressed Sensing: Implementation and Numerical Experiments
- NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
- A class of DFP algorithms with revised search direction
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Algorithm 851
- From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
- Two-Point Step Size Gradient Methods
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Self-Scaling Variable Metric (SSVM) Algorithms
- Optimal conditioning of self-scaling variable metric algorithms
- A Globally and Superlinearly Convergent Algorithm for Nonsmooth Convex Minimization
- Convergence Properties of the DFP Algorithm for Unconstrained Optimization
- Matrix analyses on the Dai–Liao conjugate gradient method
- A hybrid scaling parameter for the scaled memoryless BFGS method based on the ℓ∞ matrix norm
- A restart scheme for the Dai–Liao conjugate gradient method by ignoring a direction of maximum magnification by the search direction matrix
- CUTEr and SifDec
- Benchmarking optimization software with performance profiles
- Convergence of the DFP algorithm without exact line search