On convergence of the Gauss-Newton method for convex composite optimization.


DOI: 10.1007/s101070100249
zbMATH: 1049.90132
OpenAlex: W2082725151
MaRDI QID: Q5957569

Chong Li, Xinghua Wang

Publication date: 2002

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://doi.org/10.1007/s101070100249
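
The subject of this record, the Gauss-Newton method for convex composite optimization, typically concerns problems of the form min_x h(F(x)) with h convex and F smooth, where each iteration solves the linearized convex subproblem min_d h(F(x_k) + F'(x_k)d). The following minimal sketch (not taken from the paper) illustrates the iteration in the special case h = Euclidean norm, where the subproblem reduces to linear least squares; the function names and the data are illustrative assumptions.

```python
import numpy as np

def gauss_newton_composite(F, J, x0, tol=1e-10, max_iter=50):
    """Gauss-Newton iteration for min_x h(F(x)) with h = Euclidean norm.

    For this choice of h the linearized convex subproblem
        min_d h(F(x_k) + F'(x_k) d)
    is a linear least-squares problem; a general convex h would require
    a convex-programming solver for each subproblem.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)                        # residual F(x_k)
        Jk = J(x)                       # Jacobian F'(x_k)
        # Gauss-Newton direction: minimize ||F(x_k) + F'(x_k) d||_2 over d.
        d, *_ = np.linalg.lstsq(Jk, -r, rcond=None)
        x = x + d
        if np.linalg.norm(d) < tol:     # stop once the step is negligible
            break
    return x

# Illustrative use: recover a in the model y = exp(a*t) from noiseless samples.
t = np.array([0.0, 0.5, 1.0, 1.5])
y = np.exp(0.7 * t)
F = lambda a: np.exp(a[0] * t) - y              # residual map R -> R^4
J = lambda a: (t * np.exp(a[0] * t))[:, None]   # its 4x1 Jacobian
print(gauss_newton_composite(F, J, np.array([0.0])))   # approximately [0.7]
```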




Related Items

On iterative computation of fixed points and optimization
Sharp minima for multiobjective optimization in Banach spaces
Composite proximal bundle method
Convergence analysis of the Gauss-Newton-type method for Lipschitz-like mappings
The equivalence of three types of error bounds for weakly and approximately convex functions
Convergence analysis of a proximal Gauss-Newton method
Generalized weak sharp minima in cone-constrained convex optimization with applications
The multiproximal linearization method for convex composite problems
Strong KKT conditions and weak sharp solutions in convex-composite optimization
Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
Convergence of the Gauss-Newton method for convex composite optimization problems under majorant condition on Riemannian manifolds
Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
The value function approach to convergence analysis in composite optimization
Gauss-Newton method for convex composite optimizations on Riemannian manifolds
Relaxed Gauss-Newton methods with applications to electrical impedance tomography
Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
Expanding the applicability of the Gauss-Newton method for convex optimization under a majorant condition
Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
Generalized weak sharp minima in cone-constrained convex optimization on Hadamard manifolds
On convergence rates of linearized proximal algorithms for convex composite optimization with applications
Strong metric (sub)regularity of Karush-Kuhn-Tucker mappings for piecewise linear-quadratic convex-composite optimization and the quadratic convergence of Newton's method