A stochastic Levenberg-Marquardt method using random models with complexity results
Publication:5075237
Abstract: Globally convergent variants of the Gauss-Newton algorithm are often the methods of choice to tackle nonlinear least-squares problems. Among such frameworks, Levenberg-Marquardt and trust-region methods are two well-established, similar paradigms. Both schemes have been studied when the Gauss-Newton model is replaced by a random model that is only accurate with a given probability. Trust-region schemes have also been applied to problems where the objective value is subject to noise: this setting is of particular interest in fields such as data assimilation, where efficient methods that can adapt to noise are needed to account for the intrinsic uncertainty in the input data. In this paper, we describe a stochastic Levenberg-Marquardt algorithm that handles noisy objective function values and random models, provided sufficient accuracy is achieved in probability. Our method relies on a specific scaling of the regularization parameter that allows us to leverage existing results for trust-region algorithms. Moreover, we exploit the structure of our objective through the use of a family of stationarity criteria tailored to least-squares problems. Provided the probability of accurate function estimates and models is sufficiently large, we bound the expected number of iterations needed to reach an approximate stationary point, generalizing results based on deterministic models or noiseless function values. We illustrate the link between our approach and several applications related to inverse problems and machine learning.
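To make the iteration described in the abstract concrete, here is a minimal Python sketch of a stochastic Levenberg-Marquardt loop of the kind it outlines: noisy residual and Jacobian estimates, a regularized Gauss-Newton step, and an accept/reject test that drives the regularization parameter. The Gaussian noise model, the gradient-norm scaling of the regularization parameter, and all numerical constants (`gamma0`, `eta`, the update factors) are illustrative assumptions for this sketch, not the paper's actual algorithm or constants.

```python
import numpy as np

def stochastic_lm(residual, jacobian, x0, gamma0=1.0, eta=0.1,
                  tol=1e-6, max_iter=200, rng=None):
    """Minimal stochastic Levenberg-Marquardt sketch (illustrative only).

    `residual(x, rng)` and `jacobian(x, rng)` return noisy estimates of the
    residual vector r(x) and its Jacobian J(x); the paper's analysis assumes
    such estimates are sufficiently accurate with high enough probability.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    gamma = gamma0
    for _ in range(max_iter):
        r = residual(x, rng)                 # noisy residual estimate
        J = jacobian(x, rng)                 # random Gauss-Newton model
        g = J.T @ r                          # model gradient of f = 0.5*||r||^2
        if np.linalg.norm(g) <= tol:
            break
        # Regularized Gauss-Newton step: (J^T J + mu I) s = -g.  Scaling mu
        # with ||g|| (an illustrative choice) makes 1/mu behave like a
        # trust-region radius, mirroring the link described in the abstract.
        mu = gamma * np.linalg.norm(g)
        s = np.linalg.solve(J.T @ J + mu * np.eye(x.size), -g)
        # Predicted decrease of the regularized model
        # m(s) = 0.5*||r + J s||^2 + 0.5*mu*||s||^2,
        # compared against the (noisy) actual decrease of f.
        rs = r + J @ s
        pred = 0.5 * (r @ r - rs @ rs) - 0.5 * mu * (s @ s)
        r_new = residual(x + s, rng)
        ared = 0.5 * (r @ r - r_new @ r_new)
        if pred > 0 and ared / pred >= eta:  # successful iteration
            x = x + s
            gamma = max(0.5 * gamma, 1e-8)   # relax regularization
        else:                                # unsuccessful iteration
            gamma = 2.0 * gamma              # tighten regularization
    return x

# Usage on a toy noisy linear least-squares problem, r(x) = A x - b + noise.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
res = lambda x, rng: A @ x - b + 1e-4 * rng.standard_normal(b.size)
jac = lambda x, rng: A + 1e-4 * rng.standard_normal(A.shape)
print(stochastic_lm(res, jac, np.zeros(2)))
```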
Recommendations
- Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation
- Convergence and complexity analysis of a Levenberg-Marquardt algorithm for inverse problems
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- Levenberg-Marquardt method based on probabilistic Jacobian models for nonlinear equations
- Stochastic optimization using a trust-region method and random models
Cites work
- scientific article; zbMATH DE number 3885030 (no title available)
- scientific article; zbMATH DE number 3579922 (no title available)
- scientific article; zbMATH DE number 2154345 (no title available)
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- A Nonmonotone Matrix-Free Algorithm for Nonlinear Equality-Constrained Least-Squares Problems
- A derivative-free Gauss-Newton method
- A derivative-free algorithm for least-squares minimization
- A method for the solution of certain non-linear problems in least squares
- A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
- Adaptive regularisation for ensemble Kalman inversion
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Complexity and global rates of trust-region methods based on probabilistic models
- Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
- Convergence and complexity analysis of a Levenberg-Marquardt algorithm for inverse problems
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Convergence of trust-region methods based on probabilistic models
- Derivative-free optimization methods
- Ensemble Kalman methods for inverse problems
- Global complexity bound of the Levenberg-Marquardt method
- Improving the flexibility and robustness of model-based derivative-free optimization solvers
- Introduction to Derivative-Free Optimization
- Inverse Problem Theory and Methods for Model Parameter Estimation
- Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation
- Nonlinear least squares — the Levenberg algorithm revisited
- On the evaluation complexity of cubic regularization methods for potentially rank-deficient nonlinear least-squares problems and its relevance to constrained nonlinear optimization
- On the local convergence of a derivative-free algorithm for least-squares minimization
- Optimization methods for large-scale machine learning
- Probability and stochastics
- Stochastic derivative-free optimization using a trust region framework
- Stochastic optimization using a trust-region method and random models
- The ensemble Kalman filter for combined state and parameter estimation
- Tikhonov regularization within ensemble Kalman inversion
- Trust-region methods without using derivatives: worst case complexity and the nonsmooth case
Cited in (11)
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- On the complexity of a stochastic Levenberg-Marquardt method
- Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
- Levenberg-Marquardt methods based on probabilistic gradient models and inexact subproblem solution, with application to data assimilation
- Convergence analysis of a subsampled Levenberg-Marquardt algorithm
- Assessing stochastic algorithms for large scale nonlinear least squares problems using extremal probabilities of linear combinations of gamma random variables
- Complexity analysis of regularization methods for implicitly constrained least squares
- A fully stochastic second-order trust region method
- TREGO: a trust-region framework for efficient global optimization
- A stochastic iteratively regularized Gauss-Newton method
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization