Descent and penalization techniques for equilibrium problems with nonlinear constraints (Q2342130)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Descent and penalization techniques for equilibrium problems with nonlinear constraints | scientific article | |
Statements
Descent and penalization techniques for equilibrium problems with nonlinear constraints (English)
11 May 2015
The paper considers equilibrium problems of the form \[ \text{Find }x^{\ast }\in C\text{ such that }f\left( x^{\ast },y\right) \geq 0\quad \forall y\in C,\tag{EP} \] where \(C\subset \mathbb{R}^{n}\) is the solution set of a given convex system satisfying Slater's condition and \(f:\mathbb{R}^{n}\times \mathbb{R}^{n}\rightarrow \mathbb{R}\). Moreover, it is assumed that \(C\subset D\), where \(D\) is a polytope, and that all the involved functions are smooth.

The equilibrium problem (EP) is reformulated as an optimization problem by means of a gap function \(g:C\rightarrow \mathbb{R}_{+}\) such that \(x\in C\) solves (EP) if and only if \(g\left( x\right) =0\). The authors consider gap functions of the form \[ g\left( x\right) =-\inf \left\{ f\left( x,y\right) +\alpha h\left( x,y\right) :y\in P\left( x\right) \right\} , \] where \(P\left( x\right) \subset D\) is a suitable polyhedral approximation of \(C\), \(\alpha >0\) is a fixed regularization parameter and \(h:\mathbb{R}^{n}\times \mathbb{R}^{n}\rightarrow \mathbb{R}\) is an auxiliary function such that:
- \(h\left( x,y\right) \geq 0\) for all \(x,y\in D\) and \(h\left( z,z\right) =0\) for all \(z\in D\);
- \(h\left( x,\cdot \right) \) is strictly convex for all \(x\in D\);
- \(\nabla _{y}h\left( z,z\right) =0\) for all \(z\in D\);
- \(\left\langle \nabla _{x}h\left( x,y\right) +\nabla _{y}h\left( x,y\right) ,y-x\right\rangle \geq 0\) for all \(x,y\in D\).

The paper proposes two descent methods for the global minimization of \(g\) that solve a convex optimization subproblem with linear constraints at each iteration. Theorems 2 and 3 prove that these algorithms either reach a solution of (EP) after finitely many iterations or generate bounded sequences contained in \(D\) whose cluster points are solutions of (EP). The theoretical advantages of these methods over competing ones are that the convex subproblems are linearly constrained, that convergence is guaranteed under weak assumptions, and that the regularization parameter \(\alpha \) remains fixed (letting \(\alpha \) tend to zero causes numerical instability). The relative computational efficiency of these promising algorithms remains to be confirmed in practice.
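To make the gap-function descent idea concrete, the following is a minimal sketch, not the paper's algorithm. It assumes a hypothetical bifunction \(f(x,y)=\langle F(x),y-x\rangle\) with an affine strongly monotone map \(F\), the quadratic auxiliary function \(h(x,y)=\tfrac{1}{2}\|y-x\|^{2}\), and a box constraint set (so \(P(x)=C\) is already polyhedral and the inner minimization has a closed form via projection); the fixed-direction Armijo search follows the classical regularized-gap-function scheme rather than the penalization technique of the paper.

```python
import numpy as np

# Hypothetical test problem: f(x, y) = <F(x), y - x> with F(x) = A x + b,
# h(x, y) = 0.5 * ||y - x||^2, and C a box, so the inner problem
#   min_{y in C} f(x, y) + alpha * h(x, y)
# is solved in closed form by projecting x - F(x)/alpha onto C.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([-1.0, 2.0])
lo, hi = -1.0, 1.0                        # C = [-1, 1]^2
alpha = 1.0                               # fixed regularization parameter

def F(x):
    return A @ x + b

def proj_C(z):
    return np.clip(z, lo, hi)             # projection onto the box

def gap(x):
    """Gap value g(x) >= 0 and the inner minimizer y(x); g(x) = 0 iff x solves (EP)."""
    y = proj_C(x - F(x) / alpha)
    d = y - x
    return -(F(x) @ d + 0.5 * alpha * d @ d), y

def descent(x, tol=1e-10, max_iter=500, sigma=1e-4):
    """Minimize g by moving along d = y(x) - x with Armijo backtracking."""
    for _ in range(max_iter):
        g, y = gap(x)
        if g <= tol:                      # g small => x is (nearly) a solution
            break
        d = y - x
        t = 1.0
        while True:                       # Armijo-type sufficient decrease on g
            g_new, _ = gap(x + t * d)
            if g_new <= g - sigma * t * (d @ d) or t < 1e-12:
                break
            t *= 0.5
        x = x + t * d                     # iterates stay in the convex box C
    return x

x_star = descent(np.zeros(2))
print(x_star, gap(x_star)[0])
```

For this strongly monotone example the unconstrained stationarity condition \(F(x)=0\) already has its solution inside the box, so the iterates converge to \(A^{-1}(-b)\); the gap value at the returned point is (numerically) zero, illustrating the role of \(g\) as a merit function.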
equilibrium problems
gap functions
descent methods