A Gauss-Newton method for convex composite optimization
Publication: 1914073
DOI: 10.1007/BF01585997 · zbMath: 0846.90083 · MaRDI QID: Q1914073
Michael C. Ferris, James V. Burke
Publication date: 2 June 1996
Published in: Mathematical Programming. Series A. Series B
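The paper concerns Gauss-Newton iterations for convex composite problems of the form min_x h(c(x)), where h is convex and c is smooth: each step linearizes c at the current point and minimizes the resulting convex model, typically over a trust region. The Python snippet below is a minimal illustrative sketch only; the choice h = ℓ1-norm, the fixed trust-region radius, the stopping rule, and the use of cvxpy as the subproblem solver are assumptions for the example, not details taken from this record.

```python
# Sketch of a Gauss-Newton step for convex composite minimization
#   min_x f(x) = h(c(x)),  h convex (here h = ||.||_1), c smooth.
# Illustrative only: the trust-region radius, choice of h, and stopping
# rule are assumptions, not the paper's exact algorithm or analysis.
import numpy as np
import cvxpy as cp

def gauss_newton_composite(c, jac, x0, delta=1.0, tol=1e-8, max_iter=50):
    """Iterate x_{k+1} = x_k + d_k, where d_k solves the convex subproblem
       min_d h(c(x_k) + c'(x_k) d)  subject to  ||d||_inf <= delta."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = c(x), jac(x)                      # residual and Jacobian at x_k
        d = cp.Variable(x.size)
        sub = cp.Problem(cp.Minimize(cp.norm1(r + J @ d)),
                         [cp.norm(d, 'inf') <= delta])
        sub.solve()                              # convex linearized subproblem
        step = d.value
        if step is None or np.linalg.norm(step) < tol:
            break
        x = x + step
    return x

# Toy usage: minimize ||c(x)||_1 for a smooth two-dimensional map.
c = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, -1.0]])
print(gauss_newton_composite(c, jac, np.array([2.0, 2.0])))
```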
Related Items
- Uniform convergence of higher order quasi Hermite-Fejér interpolation
- On iterative computation of fixed points and optimization
- Two-square theorems for infinite matrices on certain fields
- On applications of the calmness moduli for multifunctions to error bounds
- Extending the applicability of Gauss-Newton method for convex composite optimization on Riemannian manifolds
- Convergence analysis of new inertial method for the split common null point problem
- Weak sharp efficiency in multiobjective optimization
- Extending the applicability of the Gauss-Newton method for convex composite optimization using restricted convergence domains and average Lipschitz conditions
- Sharp minima for multiobjective optimization in Banach spaces
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- Composite proximal bundle method
- Robust low transformed multi-rank tensor methods for image alignment
- Convergence analysis of the Gauss-Newton-type method for Lipschitz-like mappings
- Weak Sharp Minima for Convex Infinite Optimization Problems in Normed Linear Spaces
- A dynamical system method for solving the split convex feasibility problem
- Generalized Kalman smoothing: modeling and algorithms
- An Exact Penalty Method for Nonconvex Problems Covering, in Particular, Nonlinear Programming, Semidefinite Programming, and Second-Order Cone Programming
- The equivalence of three types of error bounds for weakly and approximately convex functions
- Convergence analysis of a proximal Gauss-Newton method
- Pseudotransient continuation for solving systems of nonsmooth equations with inequality constraints
- Empirical risk minimization: probabilistic complexity and stepsize strategy
- Generalized weak sharp minima in cone-constrained convex optimization with applications
- The multiproximal linearization method for convex composite problems
- Linearly-convergent FISTA variant for composite optimization with duality
- Relative regularity conditions and linear regularity properties for split feasibility problems in normed linear spaces
- Riemannian linearized proximal algorithms for nonnegative inverse eigenvalue problem
- Faster first-order primal-dual methods for linear programming using restarts and sharpness
- Consistent approximations in composite optimization
- Asymptotic analysis in convex composite multiobjective optimization problems
- Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
- Relax-and-split method for nonconvex inverse problems
- The value function approach to convergence analysis in composite optimization
- Gauss-Newton method for convex composite optimizations on Riemannian manifolds
- Complete characterizations of local weak sharp minima with applications to semi-infinite optimization and complementarity
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Local and global behavior for algorithms of solving equations
- Relaxed Gauss-Newton Methods with Applications to Electrical Impedance Tomography
- Convergence analysis of the Gauss-Newton method for convex inclusion and convex-composite optimization problems
- Expanding the applicability of the Gauss-Newton method for convex optimization under a majorant condition
- Local convergence analysis of inexact Gauss-Newton method for singular systems of equations under majorant and center-majorant condition
- Characterizing robust weak sharp solution sets of convex optimization problems with uncertainty
- A new trust-region method for solving systems of equalities and inequalities
- Weak sharp minima revisited. III: Error bounds for differentiable convex inclusions
- Characterizations of asymptotic cone of the solution set of a composite convex optimization problem
- Modified inexact Levenberg-Marquardt methods for solving nonlinear least squares problems
- On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications
- Trust-region quadratic methods for nonlinear systems of mixed equalities and inequalities
- Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
- Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton’s Method
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
- Efficiency of minimizing compositions of convex functions and smooth maps
- Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
- High-Order Optimization Methods for Fully Composite Problems
- A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
Cites Work
- Least-norm linear programming solution as an unconstrained minimization problem
- Optimality conditions for non-finite valued convex composite functions
- Stability and regular points of inequality systems
- Strong uniqueness and second order convergence in nonlinear discrete approximation
- Strong uniqueness: A far-reaching criterion for the convergence analysis of iterative procedures
- A Newton-Raphson method for the solution of systems of equations
- The Fritz John necessary optimality conditions in the presence of equality and inequality constraints
- Extension of Newton's method to nonlinear functions with values in a cone
- Weak Sharp Minima in Mathematical Programming
- Strong uniqueness in sequential linear programming
- Local properties of algorithms for minimizing nonsmooth composite functions
- Descent methods for composite nondifferentiable optimization problems
- Normal solutions of linear programs
- First- and Second-Order Epi-Differentiability in Nonlinear Programming
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Regularity and Stability for Convex Multivalued Functions
- A global quadratic algorithm for solving a system of mixed equalities and inequalities
- An Exact Penalization Viewpoint of Constrained Optimization
- Stability Theory for Systems of Inequalities. Part I: Linear Systems
- The Convergence of the Ben-Israel Iteration for Nonlinear Least Squares Problems
- Stability Theory for Systems of Inequalities, Part II: Differentiable Nonlinear Systems
- A Unified Analysis of Hoffman’s Bound via Fenchel Duality
- Generalized inverse methods for the best least squares solution of systems of non-linear equations
- Convex Analysis
- Normed Convex Processes
- An Embedding Theorem for Spaces of Convex Sets
- A method for the solution of certain non-linear problems in least squares