A Gauss-Newton method for convex composite optimization

From MaRDI portal
Publication: 1914073

DOI: 10.1007/BF01585997
zbMath: 0846.90083
MaRDI QID: Q1914073

Michael C. Ferris, James V. Burke

Publication date: 2 June 1996

Published in: Mathematical Programming. Series A. Series B




Related Items

Uniform convergence of higher order quasi Hermite-Fejér interpolation
On iterative computation of fixed points and optimization
Two-square theorems for infinite matrices on certain fields
On applications of the calmness moduli for multifunctions to error bounds
Extending the applicability of Gauss-Newton method for convex composite optimization on Riemannian manifolds
Convergence analysis of new inertial method for the split common null point problem
Weak sharp efficiency in multiobjective optimization
Extending the applicability of the Gauss-Newton method for convex composite optimization using restricted convergence domains and average Lipschitz conditions
Sharp minima for multiobjective optimization in Banach spaces
New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
Composite proximal bundle method
Robust low transformed multi-rank tensor methods for image alignment
Convergence analysis of the Gauss-Newton-type method for Lipschitz-like mappings
Weak Sharp Minima for Convex Infinite Optimization Problems in Normed Linear Spaces
A dynamical system method for solving the split convex feasibility problem
Generalized Kalman smoothing: modeling and algorithms
An Exact Penalty Method for Nonconvex Problems Covering, in Particular, Nonlinear Programming, Semidefinite Programming, and Second-Order Cone Programming
The equivalence of three types of error bounds for weakly and approximately convex functions
Convergence analysis of a proximal Gauss-Newton method
Pseudotransient continuation for solving systems of nonsmooth equations with inequality constraints
Empirical risk minimization: probabilistic complexity and stepsize strategy
Generalized weak sharp minima in cone-constrained convex optimization with applications
The multiproximal linearization method for convex composite problems
Linearly-convergent FISTA variant for composite optimization with duality
Relative regularity conditions and linear regularity properties for split feasibility problems in normed linear spaces
Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
Faster first-order primal-dual methods for linear programming using restarts and sharpness
Consistent approximations in composite optimization
Asymptotic analysis in convex composite multiobjective optimization problems
Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
Relax-and-split method for nonconvex inverse problems
The value function approach to convergence analysis in composite optimization
Gauss-Newton method for convex composite optimizations on Riemannian manifolds
Complete characterizations of local weak sharp minima with applications to semi-infinite optimization and complementarity
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
Local and global behavior for algorithms of solving equations
Relaxed Gauss-Newton Methods with Applications to Electrical Impedance Tomography
Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
Expanding the applicability of the Gauss-Newton method for convex optimization under a majorant condition
Local convergence analysis of inexact Gauss-Newton method for singular systems of equations under majorant and center-majorant condition
Characterizing robust weak sharp solution sets of convex optimization problems with uncertainty
A new trust-region method for solving systems of equalities and inequalities
Weak sharp minima revisited. III: Error bounds for differentiable convex inclusions
Characterizations of asymptotic cone of the solution set of a composite convex optimization problem
Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
Strong Metric (Sub)regularity of Karush-Kuhn-Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton's Method
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
Efficiency of minimizing compositions of convex functions and smooth maps
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
High-Order Optimization Methods for Fully Composite Problems
A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications



Cites Work