Scalable subspace methods for derivative-free nonlinear least-squares optimization
Publication:6038650
Abstract: We introduce a general framework for large-scale model-based derivative-free optimization based on iterative minimization within random subspaces. We present a probabilistic worst-case complexity analysis for the method, proving high-probability bounds on the number of iterations before a given optimality threshold is reached. The framework is then specialized to nonlinear least-squares problems via a model-based variant of the Gauss-Newton method. This method achieves scalability by constructing local linear interpolation models to approximate the Jacobian and by computing each new step in a subspace of user-determined dimension. We then describe a practical implementation of this framework, which we call DFBGN. We outline efficient techniques for selecting the interpolation points and the search subspace, yielding an implementation with a low per-iteration linear algebra cost (linear in the problem dimension) that also achieves fast objective decrease as measured by evaluations. Extensive numerical results demonstrate that DFBGN has improved scalability, yielding strong performance on large-scale nonlinear least-squares problems.
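To make the abstract's iteration structure concrete, here is a minimal Python sketch of one subspace Gauss-Newton step, written under our own simplifications. It is not the authors' DFBGN code: it draws a fresh random orthonormal basis at each call, approximates the reduced Jacobian by forward differences (a special case of linear interpolation), and uses a plain decrease test where DFBGN maintains interpolation sets and a trust region. The function name `subspace_dfo_gn_step` and its parameters are illustrative.

```python
import numpy as np

def subspace_dfo_gn_step(r, x, delta, p, rng):
    """One illustrative iteration: build a linear model of the residual
    vector r(x) in a random p-dimensional subspace, then take a
    Gauss-Newton step within that subspace.

    r     : callable returning the residual vector, shape (m,)
    x     : current iterate, shape (n,)
    delta : sampling radius for the interpolation points
    p     : subspace dimension (p << n)
    rng   : numpy random Generator
    """
    n = x.size
    # Random orthonormal basis Q (n x p) spanning the search subspace.
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    r0 = r(x)
    # Forward differences along each basis direction give an m x p
    # approximation J_hat of J(x) Q, the Jacobian restricted to the subspace.
    J_hat = np.column_stack([(r(x + delta * Q[:, j]) - r0) / delta
                             for j in range(p)])
    # Gauss-Newton step in the subspace: minimize ||r0 + J_hat s||_2.
    s_hat, *_ = np.linalg.lstsq(J_hat, -r0, rcond=None)
    x_new = x + Q @ s_hat
    # Simple decrease test (DFBGN uses a trust-region ratio instead).
    if np.sum(r(x_new) ** 2) < np.sum(r0 ** 2):
        return x_new
    return x

# Example: a linear residual r(x) = x - t in 100 dimensions, minimized at t.
rng = np.random.default_rng(0)
target = rng.standard_normal(100)
residual = lambda x: x - target
x = np.zeros(100)
for _ in range(200):
    x = subspace_dfo_gn_step(residual, x, delta=1e-3, p=5, rng=rng)
print(np.linalg.norm(x - target))  # decreases toward 0
```

Each step in this sketch costs p + 2 residual evaluations and, for fixed p, linear algebra that scales linearly in n (the QR factorization is O(np^2)), which mirrors the low per-iteration cost the abstract highlights for DFBGN.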
Cites work
- scientific article (zbMATH DE number 5544465; no title available)
- scientific article (zbMATH DE number 6026126; no title available)
- scientific article (zbMATH DE number 961607; no title available)
- A Derivative-Free Method for Structured Optimization Problems
- A derivative-free Gauss-Newton method
- A derivative-free algorithm for least-squares minimization
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- A randomized nonmonotone block proximal gradient method for a class of structured nonlinear programming
- A stochastic subspace approach to gradient-free optimization in high dimensions
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- Advances and Trends in Optimization with Engineering Applications
- An investigation of Newton-sketch and subsampled Newton methods
- Bayesian optimization in a billion dimensions via random embeddings
- Benchmarking Derivative-Free Optimization Algorithms
- Benchmarking optimization software with performance profiles
- Block stochastic gradient iteration for convex and nonconvex optimization
- Bound-constrained global optimization of functions with low effective dimensionality using multiple random embeddings
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Complexity and global rates of trust-region methods based on probabilistic models
- Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
- Connected components in random graphs with given expected degree sequences
- Convergence of trust-region methods based on probabilistic models
- Coordinate descent algorithms
- Derivative-free optimization methods
- Detection and Remediation of Stagnation in the Nelder-Mead Algorithm Using a Sufficient Decrease Condition
- Direct search based on probabilistic descent
- Direct search based on probabilistic feasible descent for bound and linearly constrained problems
- Efficient random coordinate descent algorithms for large-scale structured nonconvex optimization
- Error bounds for overdetermined and underdetermined generalized centred simplex gradients
- Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions
- Geometry of interpolation sets in derivative free optimization
- Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Improving the flexibility and robustness of model-based derivative-free optimization solvers
- Inexact Block Coordinate Descent Algorithms for Nonsmooth Nonconvex Optimization
- Inexact derivative-free optimization for bilevel learning
- Introduction to Derivative-Free Optimization
- Least Frobenius norm updating of quadratic models that satisfy interpolation conditions
- Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- On trust region methods for unconstrained minimization without derivatives
- Optimization by moving ridge functions: derivative-free optimization for computationally intensive functions
- Optimizing partially separable functions without derivatives
- Parallel Selective Algorithms for Nonconvex Big Data Optimization
- Random gradient-free minimization of convex functions
- Randomized Algorithms for Matrices and Data
- Sharp nonasymptotic bounds on the norm of random matrices with independent entries
- Sketching as a tool for numerical linear algebra
- Sparser Johnson-Lindenstrauss transforms
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic optimization using a trust-region method and random models
- Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
- Stochastic three points method for unconstrained smooth minimization
- Sub-sampled Newton methods
- Trust Region Methods
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
- Two decades of blackbox optimization applications
- Worst case complexity of direct search
Cited in (13)
- A Subspace Decomposition Principle for Scaled Gradient Projection Methods: Local Theory
- Adaptive state-dependent diffusion for derivative-free optimization
- Global optimization using random embeddings
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
- The substitution secant/finite difference method for large scale sparse unconstrained optimization
- On the global complexity of a derivative-free Levenberg-Marquardt algorithm via orthogonal spherical smoothing
- Expected decrease for derivative-free algorithms using random subspaces
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Direct Search Based on Probabilistic Descent in Reduced Spaces
- Q-fully quadratic modeling and its application in a random subspace derivative-free method
- Quadratic regularization methods with finite-difference gradient approximations
- Stochastic zeroth order descent with structured directions
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization