Efficiency of higher-order algorithms for minimizing composite functions
Publication: 6155068
Abstract: Composite minimization involves a collection of functions that are aggregated in a nonsmooth manner. It covers, as particular cases, optimization problems with functional constraints, minimization of max-type functions, and simple composite minimization problems where the objective function has a nonsmooth component. We design a higher-order majorization algorithmic framework for fully composite problems (possibly nonconvex). Our framework replaces each component with a higher-order surrogate such that the corresponding error function has a higher-order Lipschitz continuous derivative. We present convergence guarantees for our method for composite optimization problems with (non)convex and (non)smooth objective functions. In particular, we prove stationary-point convergence guarantees for general nonconvex (possibly nonsmooth) problems, and under the Kurdyka-Łojasiewicz (KL) property of the objective function we derive improved rates depending on the KL parameter. For convex (possibly nonsmooth) problems we also provide sublinear convergence rates.
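To make the surrogate idea concrete, below is a minimal sketch of one majorization step for the smooth second-order case (p = 2), i.e. a cubic-regularized Newton step in the spirit of the cited work on cubic regularization of Newton's method: when the Hessian is M-Lipschitz, the second-order Taylor model plus the term (M/6)||h||^3 majorizes the objective, so minimizing the surrogate guarantees descent. The function names, the toy 1-D objective, and the constant M are illustrative assumptions, not the paper's actual method or notation.

```python
# Sketch of one higher-order majorization step (second-order case):
# minimize the cubic-regularized Taylor surrogate around the current point.
# Assumed toy setup, not the paper's algorithm.
import numpy as np
from scipy.optimize import minimize

def cubic_newton_step(f, grad, hess, x, M):
    """One majorization step: minimize the cubic-regularized surrogate at x."""
    g, H = grad(x), hess(x)

    def surrogate(h):
        # T_2(x + h) + (M/6)||h||^3 >= f(x + h) when hess is M-Lipschitz,
        # so decreasing the surrogate decreases f.
        return f(x) + g @ h + 0.5 * h @ H @ h + (M / 6) * np.linalg.norm(h) ** 3

    res = minimize(surrogate, np.zeros_like(x))  # inner subproblem (generic solver)
    return x + res.x

# Toy usage on the smooth nonconvex function f(x) = x^4/4 - x^2 (assumed example).
f    = lambda x: 0.25 * x[0] ** 4 - x[0] ** 2
grad = lambda x: np.array([x[0] ** 3 - 2 * x[0]])
hess = lambda x: np.array([[3 * x[0] ** 2 - 2]])

x = np.array([3.0])
for _ in range(20):
    # M chosen to dominate the local Lipschitz constant of hess on the
    # region traversed (an assumption for this toy run).
    x = cubic_newton_step(f, grad, hess, x, M=18.0)
print(x)  # approaches sqrt(2), a stationary point of f
```

In the paper's fully composite setting the inner subproblem would also carry the nonsmooth component exactly; here the inner solve is delegated to a generic solver purely for illustration.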
Recommendations
- High-order optimization methods for fully composite problems
- Convergence analysis of stochastic higher-order majorization–minimization algorithms
- Efficiency of minimizing compositions of convex functions and smooth maps
- Local properties of algorithms for minimizing nonsmooth composite functions
- The evaluation complexity of finding high-order minimizers of nonconvex optimization
Cites work
- scientific article; zbMATH DE number 3737398
- scientific article; zbMATH DE number 729680
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- A model algorithm for composite nondifferentiable optimization problems
- Clarke Subgradients of Stratifiable Functions
- Conditions for convergence of trust region algorithms for nonsmooth optimization
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- Cubic regularization of Newton method and its global performance
- Efficiency of minimizing compositions of convex functions and smooth maps
- Gradient methods for minimizing composite functions
- High-order optimization methods for fully composite problems
- Implementable tensor methods in unconstrained convex optimization
- Incremental majorization-minimization optimization with application to large-scale machine learning
- Inexact basic tensor methods for some classes of convex optimization problems
- Linear convergence of first order methods for non-strongly convex optimization
- Majorizing Functions and Convergence of the Gauss–Newton Method for Convex Composite Optimization
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Smooth minimization of non-smooth functions
- Tensor methods for minimizing convex functions with Hölder continuous higher-order derivatives
- Testing Unconstrained Optimization Software
- The multiproximal linearization method for convex composite problems
- The value function approach to convergence analysis in composite optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models