Modified Gauss–Newton scheme with worst case guarantees for global performance


DOI: 10.1080/08927020600643812
zbMath: 1136.65051
OpenAlex: W2983918189
MaRDI QID: Q5436914

No author found.

Publication date: 18 January 2008

Published in: Optimization Methods and Software

Full work available at URL: https://doi.org/10.1080/08927020600643812




Related Items

An improvement of adaptive cubic regularization method for unconstrained optimization problems
A new adaptive trust-region method for system of nonlinear equations
Smoothing projected Barzilai-Borwein method for constrained non-Lipschitz optimization
Proximal methods avoid active strict saddles of weakly convex functions
Local analysis of a spectral correction for the Gauss-Newton model applied to quadratic residual problems
Global convergence of model function based Bregman proximal minimization algorithms
Global complexity bound of the inexact Levenberg-Marquardt method
Complexity of the Newton method for set-valued maps
A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization
Sparse solutions of optimal control via Newton method for under-determined systems
A first-order primal-dual algorithm for convex problems with applications to imaging
On a global complexity bound of the Levenberg-Marquardt method
Newton-MR: inexact Newton method with minimum residual sub-problem solver
Regularized Newton Method with Global $\mathcal{O}(1/k^2)$ Convergence
On the complexity of a stochastic Levenberg-Marquardt method
Majorization-minimization-based Levenberg-Marquardt method for constrained nonlinear least squares
Relax-and-split method for nonconvex inverse problems
A structured diagonal Hessian approximation method with evaluation complexity analysis for nonlinear least squares
Some modifications of Newton's method for solving systems of equations
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
A note on the smoothing quadratic regularization method for non-Lipschitz optimization
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
A brief survey of methods for solving nonlinear least-squares problems
Minimizing uniformly convex functions by cubic regularization of Newton method
On the use of iterative methods in cubic regularization for unconstrained optimization
Trust-region and other regularisations of linear least-squares problems
On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
Global complexity bound of the Levenberg–Marquardt method
Efficiency of minimizing compositions of convex functions and smooth maps
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
New versions of Newton method: step-size choice, convergence domain and under-determined equations
Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
High-Order Optimization Methods for Fully Composite Problems
Adaptive Gauss-Newton method for solving systems of nonlinear equations
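
The record itself gives no algorithmic details. As a rough illustration of the family of methods the title and the related items revolve around (Gauss-Newton iterations for a nonlinear system F(x) = 0 with a regularization term that supplies a global safeguard), the following Python sketch implements a Levenberg-Marquardt-style damped Gauss-Newton step. It is not the paper's specific modified scheme; the function name regularized_gauss_newton, the damping-update constants, and the test system are illustrative assumptions.

import numpy as np

def regularized_gauss_newton(F, J, x0, lam0=1.0, tol=1e-8, max_iter=100):
    # Damped Gauss-Newton iteration for F(x) = 0 (illustrative sketch only).
    # Each step solves the regularized normal equations
    #     (J^T J + lam * I) d = -J^T F(x)
    # and adapts the damping parameter lam depending on whether ||F|| decreased.
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(max_iter):
        r = F(x)                                  # current residual vector
        Jx = J(x)                                 # Jacobian of F at x
        if np.linalg.norm(Jx.T @ r) < tol:        # gradient of 0.5*||F||^2 is small
            break
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -(Jx.T @ r))
        x_new = x + d
        if np.linalg.norm(F(x_new)) < np.linalg.norm(r):
            x = x_new                             # accept the step, relax the damping
            lam = max(lam / 2.0, 1e-12)
        else:
            lam *= 10.0                           # reject the step, increase the damping
    return x

# Hypothetical usage on a small 2x2 nonlinear system (unit circle intersected with
# the line x0 = x1); the iterate should approach (1/sqrt(2), 1/sqrt(2)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(regularized_gauss_newton(F, J, x0=[2.0, 0.5]))

The accept/reject rule is what plays the role of a global safeguard in sketches like this: the damping parameter only shrinks after the residual norm actually decreases, which is, in spirit, the mechanism that regularized Gauss-Newton, Levenberg-Marquardt, and trust-region variants use to obtain worst-case global behaviour.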



Cites Work