A Gauss-Newton method for convex composite optimization

From MaRDI portal

Publication:1914073

DOI: 10.1007/BF01585997 · zbMath: 0846.90083 · MaRDI QID: Q1914073

Michael C. Ferris, James V. Burke

Publication date: 2 June 1996

Published in: Mathematical Programming. Series A. Series B
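For context, the problem class named in the title is the standard convex composite model; the following formulation is a sketch based on the usual statement in the literature, not text taken from this page:

\[
\min_{x \in \mathbb{R}^n} \; f(x) := h(c(x)),
\]
where \(h : \mathbb{R}^m \to \mathbb{R}\) is convex and \(c : \mathbb{R}^n \to \mathbb{R}^m\) is continuously differentiable. A Gauss-Newton step at the iterate \(x_k\) is then typically obtained from the convex subproblem
\[
\min_{\|d\| \le \Delta} \; h\bigl(c(x_k) + c'(x_k)\,d\bigr),
\]
with \(\Delta > 0\) a bound on the step length.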




Related Items (57)

Uniform convergence of higher order quasi Hermite-Fejér interpolation
On iterative computation of fixed points and optimization
Two-square theorems for infinite matrices on certain fields
On applications of the calmness moduli for multifunctions to error bounds
Extending the applicability of Gauss-Newton method for convex composite optimization on Riemannian manifolds
Convergence analysis of new inertial method for the split common null point problem
Weak sharp efficiency in multiobjective optimization
Extending the applicability of the Gauss-Newton method for convex composite optimization using restricted convergence domains and average Lipschitz conditions
Sharp minima for multiobjective optimization in Banach spaces
New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
Composite proximal bundle method
Robust low transformed multi-rank tensor methods for image alignment
Convergence analysis of the Gauss-Newton-type method for Lipschitz-like mappings
Weak Sharp Minima for Convex Infinite Optimization Problems in Normed Linear Spaces
A dynamical system method for solving the split convex feasibility problem
Generalized Kalman smoothing: modeling and algorithms
An Exact Penalty Method for Nonconvex Problems Covering, in Particular, Nonlinear Programming, Semidefinite Programming, and Second-Order Cone Programming
The equivalence of three types of error bounds for weakly and approximately convex functions
Convergence analysis of a proximal Gauss-Newton method
Pseudotransient continuation for solving systems of nonsmooth equations with inequality constraints
Empirical risk minimization: probabilistic complexity and stepsize strategy
Generalized weak sharp minima in cone-constrained convex optimization with applications
The multiproximal linearization method for convex composite problems
Linearly-convergent FISTA variant for composite optimization with duality
Relative regularity conditions and linear regularity properties for split feasibility problems in normed linear spaces
Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
Faster first-order primal-dual methods for linear programming using restarts and sharpness
Consistent approximations in composite optimization
Asymptotic analysis in convex composite multiobjective optimization problems
Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
Relax-and-split method for nonconvex inverse problems
The value function approach to convergence analysis in composite optimization
Gauss-Newton method for convex composite optimizations on Riemannian manifolds
Complete characterizations of local weak sharp minima with applications to semi-infinite optimization and complementarity
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
Local and global behavior for algorithms of solving equations
Relaxed Gauss-Newton Methods with Applications to Electrical Impedance Tomography
Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
Expanding the applicability of the Gauss-Newton method for convex optimization under a majorant condition
Local convergence analysis of inexact Gauss-Newton method for singular systems of equations under majorant and center-majorant condition
Characterizing robust weak sharp solution sets of convex optimization problems with uncertainty
A new trust-region method for solving systems of equalities and inequalities
Weak sharp minima revisited. III: Error bounds for differentiable convex inclusions
Characterizations of asymptotic cone of the solution set of a composite convex optimization problem
Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
Strong Metric (Sub)regularity of Karush-Kuhn-Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton's Method
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
Efficiency of minimizing compositions of convex functions and smooth maps
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
High-Order Optimization Methods for Fully Composite Problems
A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications



Cites Work




This page was built for publication: A Gauss-Newton method for convex composite optimization