A concave optimization-based approach for sparse multiobjective programming (Q2174899)

From MaRDI portal
scientific article
    Published: 27 April 2020
This paper deals with sparse multiobjective programming problems of the form \[\min\limits_{x\in X} F\left( x\right) =\left( f_{1}\left(x\right) ,\dots,f_{m-1}\left( x\right) ,\left\Vert x\right\Vert _{0}\right), \tag{\(SMOP\)} \] where \(m\geq 2\), \(X\subset \mathbb{R}^{n}\) is a compact convex set, \(f_{1},\dots,f_{m-1}\) are differentiable, and \(\left\Vert x\right\Vert _{0}\) is the number of non-zero components of \(x\). To avoid the computational difficulties caused by the \(m\)-th objective function \(\left\Vert \cdot \right\Vert _{0}\), the authors replace it by an approximating concave differentiable function \(f_{m}\), giving rise to the smooth approximating problem \[ \min_{x\in X} F_{s}\left( x\right) =\left( f_{1}\left( x\right),\dots,f_{m-1}\left( x\right) ,f_{m}\left( x\right) \right).\tag{1} \] Regarding the numerical treatment of smooth problems like (1), if \(x^{\ast }\) is Pareto optimal (also called an efficient solution), then it is Pareto stationary, i.e., for all \(y\in X\) there exists \(j\in \left\{1,\dots,m\right\} \) such that \(\nabla f_{j}\left( x^{\ast }\right) ^{T}\left(y-x^{\ast }\right) \geq 0\) (the converse holds under certain assumptions). So, when this necessary condition fails at a feasible point \(x\), there exists \(y\in X\) such that \(v:=y-x\) is a descent direction at \(x\) for all the objectives \(f_{1},\dots,f_{m}\) simultaneously. In fact, a steepest descent direction for \(F_{s}\) at \(x\) can be found by solving the problem \[ \begin{array}{ll} \min\limits_{y\in X,\beta \in \mathbb{R}} & \beta \\ \text{s.t.} & \nabla f_{i}\left( x\right) ^{T}\left( y-x\right) \leq \beta,\quad i=1,\dots,m, \end{array} \] which is a linear program whenever \(X\) is a polyhedron. The novelty of the paper consists in approaching the hard problem \((SMOP)\) by a sequence of smooth multiobjective problems like (1).
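The steepest-descent subproblem above is a small linear program in the pair \((y,\beta)\). A minimal sketch, using `scipy.optimize.linprog` with a box-shaped \(X\) and two quadratic objectives (both illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def steepest_descent_direction(x, grads, bounds):
    """Solve  min_{y in X, beta}  beta
       s.t.   grad_i^T (y - x) <= beta,  i = 1, ..., m,
    where X is the box given by `bounds`. Returns (v, beta) with v = y* - x."""
    n, m = len(x), len(grads)
    # Decision vector z = (y_1, ..., y_n, beta); the objective picks out beta.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    # Rewrite grad_i^T (y - x) <= beta  as  grad_i^T y - beta <= grad_i^T x.
    A_ub = np.hstack([np.array(grads), -np.ones((m, 1))])
    b_ub = np.array([g @ x for g in grads])
    # Box bounds on y; beta is free.
    z_bounds = list(bounds) + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=z_bounds)
    y = res.x[:n]
    return y - x, res.x[-1]

# Illustrative data: f1(x) = ||x||^2, f2(x) = ||x - (2, 0)||^2 on X = [-1, 1]^2.
x = np.array([0.5, 0.5])
grads = [np.array([1.0, 1.0]),    # grad f1 at x
         np.array([-3.0, 1.0])]   # grad f2 at x
v, beta = steepest_descent_direction(x, grads, [(-1, 1), (-1, 1)])
```

A negative optimal value \(\beta < 0\) certifies that \(x\) is not Pareto stationary and that \(v\) is a common descent direction; \(\beta = 0\) means Pareto stationarity.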
To do this, the authors reformulate \((SMOP)\) as \[ \begin{array}{ll} \min\limits_{x\in X,y\in \mathbb{R}^{n}} & \left( f_{1}\left( x\right) ,\dots,f_{m-1}\left( x\right) ,\left\Vert y\right\Vert _{0}\right) \\ \text{s.t.} & -y_{i}\leq x_{i}\leq y_{i},\;i=1,\dots,n, \end{array} \] and consider approximating problems of the form \[ \begin{array}{ll} \min\limits_{x\in X,y\in \mathbb{R}^{n}} & \left( f_{1}\left( x\right) ,\dots,f_{m-1}\left( x\right) ,\sum\limits_{i=1}^{n}f^{u}\left( y_{i}\right) \right) \\ \text{s.t.} & -y_{i}\leq x_{i}\leq y_{i},\;i=1,\dots,n, \end{array} \] where \(f^{u}:\mathbb{R}_{+}\longrightarrow \mathbb{R}\) is a certain smooth function depending on a parameter \(u\in U\subset \mathbb{R}\). The case where \(X\) is a polyhedron is studied in detail. Numerical experiments on a classical application in portfolio selection are reported.
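The review does not specify the authors' particular choice of \(f^{u}\); a standard concave smoothing of the \(\ell_0\) term in this vein is the exponential \(f^{u}(t) = 1 - e^{-ut}\) (used, e.g., in Mangasarian's concave minimization approach), sketched here under that assumption:

```python
import numpy as np

def l0_concave_approx(y, u):
    """Sum of f^u(y_i) with f^u(t) = 1 - exp(-u t) on t >= 0: a smooth,
    concave, increasing surrogate for ||y||_0.  In the reformulation the
    constraints -y_i <= x_i <= y_i force y_i >= |x_i| >= 0; abs() below
    only guards against stray negative inputs in this sketch."""
    y = np.abs(np.asarray(y, dtype=float))
    return float(np.sum(1.0 - np.exp(-u * y)))

y = np.array([0.0, 0.0, 1.3, 0.0, 2.7])   # true ||y||_0 = 2
approx_small_u = l0_concave_approx(y, u=1.0)    # loose approximation
approx_large_u = l0_concave_approx(y, u=50.0)   # tightens toward ||y||_0
```

As \(u\) grows, each term \(1 - e^{-u y_i}\) approaches the indicator of \(y_i \neq 0\), so the surrogate converges to \(\left\Vert y\right\Vert _{0}\); smaller \(u\) gives a smoother, easier problem, which motivates solving a sequence of problems with increasing \(u\).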
    multiple objective programming
    sparse optimization
    concave programming
    multiobjective steepest descent methods
