A reduced-space line-search method for unconstrained optimization via random descent directions
DOI: 10.1016/j.amc.2018.08.020 · zbMATH Open: 1428.90136 · OpenAlex: W2889797563 · Wikidata: Q57580374 · Scholia: Q57580374 · MaRDI QID: Q2007776 · FDO: Q2007776
Authors: Carlos Ardila, Jesus Estrada, Jose Capacho, Elias D. Nino Ruiz
Publication date: 22 November 2019
Published in: Applied Mathematics and Computation
Full work available at URL: https://doi.org/10.1016/j.amc.2018.08.020
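The title describes a line-search scheme driven by random descent directions. As a hedged illustration only (a generic sketch, not the authors' reduced-space algorithm, whose details are in the paper itself), one backtracking Armijo line-search step along a randomly sampled direction might look like:

```python
import math
import random

def random_descent_step(f, grad, x, alpha0=1.0, c=1e-4, tau=0.5, max_backtracks=50):
    """One backtracking (Armijo) line-search step along a random direction.

    Generic illustration; parameter names (alpha0, c, tau) are standard
    line-search conventions, not taken from the paper.
    """
    # Sample a random direction on the unit sphere.
    d = [random.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(di * di for di in d))
    d = [di / norm for di in d]

    # Ensure it is a descent direction: flip the sign if the directional
    # derivative along d is positive.
    g = grad(x)
    slope = sum(gi * di for gi, di in zip(g, d))
    if slope > 0:
        d = [-di for di in d]
        slope = -slope

    # Backtrack until the Armijo sufficient-decrease condition holds.
    fx = f(x)
    alpha = alpha0
    for _ in range(max_backtracks):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + c * alpha * slope:
            return x_new
        alpha *= tau
    return x  # no sufficient decrease found; keep the current iterate
```

For example, iterating this step on the convex quadratic `f(x) = x1^2 + x2^2` drives the objective toward its minimum, since each accepted step satisfies the sufficient-decrease condition.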
Recommendations
- Descent direction method with line search for unconstrained optimization in noisy environment
- Convergence of descent method without line search
- A simple sufficient descent method for unconstrained optimization
- A descent method for unconstrained optimization problems
- A dimension-reducing method for unconstrained optimization
Mathematics Subject Classification:
- 65K05 Numerical mathematical programming methods
- 90C26 Nonconvex programming, global optimization
- 49M15 Newton-type methods
- 49M05 Numerical methods based on necessary conditions
- 49K10 Optimality conditions for free problems in two or more independent variables
Cites Work
- Computing a Trust Region Step
- Matrix factorizations in optimization of nonlinear functions subject to linear constraints
- Numerical Optimization
- Nonlinear data assimilation
- Updating Quasi-Newton Matrices with Limited Storage
- Function minimization by conjugate gradients
- Conditioning of Quasi-Newton Methods for Function Minimization
- Testing for chaos in deterministic systems with noise
- Updating the Inverse of a Matrix
- Representations of quasi-Newton matrices and their use in limited memory methods
- Trust Region Methods
- A Nonmonotone Line Search Technique for Newton’s Method
- Numerical computation in science and engineering
- A survey of nonlinear conjugate gradient methods
- Direct search methods: Then and now
- A truncated Newton method with non-monotone line search for unconstrained optimization
- Jacobian-free Newton-Krylov methods: a survey of approaches and applications
- Tikhonov's regularization method for ill-posed problems. A comparison of different methods for the determination of the regularization parameter
- The steepest descent direction for the nonlinear bilevel programming problem
- Tikhonov Regularization and Total Least Squares
- Truncated Singular Value Decomposition Solutions to Discrete Ill-Posed Problems with Ill-Determined Numerical Rank
- Convergence of line search methods for unconstrained optimization
- Sparse quasi-Newton updates with positive definite matrix completion
- A matrix-free line-search algorithm for nonconvex optimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Error bounds for Tikhonov regularization in Hilbert scales
- An efficient implementation of the ensemble Kalman filter based on an iterative Sherman-Morrison formula
- A derivative-free trust region framework for variational data assimilation
- Data assimilation: methods, algorithms, and applications
- Computing Truncated Singular Value Decomposition Least Squares Solutions by Rank Revealing QR-Factorizations
- A reduced space branch and bound algorithm for global optimization
- Line search algorithms for locally Lipschitz functions on Riemannian manifolds
- An ensemble Kalman filter implementation based on modified Cholesky decomposition for inverse covariance matrix estimation
Cited In (3)
Uses Software