A penalization-gradient algorithm for variational inequalities (Q554800)
From MaRDI portal
Cites work:
- Q5737280
- Prox-Penalization and Splitting Methods for Constrained Variational Problems
- Ergodic convergence to a zero of the sum of monotone operators in Hilbert space
- Q3771116
- Q3668369
- Variational Analysis
- Q4089219
- Q5652137
- Q3857336
Language | Label | Description | Also known as
---|---|---|---
English | A penalization-gradient algorithm for variational inequalities | scientific article |
Statements
A penalization-gradient algorithm for variational inequalities (English)
22 July 2011
Summary: This paper studies a penalization-gradient algorithm for solving variational inequalities, namely, find \(\bar x \in C\) such that \(\langle A\bar x, y - \bar x \rangle \geq 0\) for all \(y \in C\), where \(A : H \rightarrow H\) is a single-valued operator and \(C\) is a closed convex subset of a real Hilbert space \(H\). Given \(\Psi : H \rightarrow \mathbb R \cup \{ +\infty \}\), which acts as a penalization function with respect to the constraint \(\bar x \in C\), and a penalization parameter \(\beta_k\), we consider an algorithm which alternates a proximal step with respect to \(\partial\Psi\) and a gradient step with respect to \(A\), and reads as \(x_k = (I + \lambda_k\beta_k\partial\Psi)^{-1}(x_{k-1} - \lambda_k Ax_{k-1})\). Under mild hypotheses, we obtain weak convergence when \(A\) is inverse strongly monotone and strong convergence when \(A\) is Lipschitz continuous and strongly monotone. Applications to hierarchical minimization and fixed-point problems are also given, and the multivalued case is handled by replacing the multivalued operator with its Yosida approximation, which is always Lipschitz continuous.
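The iteration above can be made concrete on a small finite-dimensional example. The sketch below is illustrative only and not the authors' implementation: it assumes \(A(x) = Mx - b\) with a positive definite matrix \(M\) (so \(A\) is Lipschitz continuous and strongly monotone), takes \(C\) as the nonnegative orthant, and uses the hypothetical penalty \(\Psi(x) = \tfrac12\,\mathrm{dist}(x, C)^2\), whose proximal map has the closed form \((v + t P_C(v))/(1 + t)\). The step sizes \(\lambda_k\) and penalization parameters \(\beta_k\) are chosen for illustration, not taken from the paper.

```python
import numpy as np

# Sketch of the penalization-gradient iteration
#   x_k = (I + lam_k * beta_k * dPsi)^{-1}(x_{k-1} - lam_k * A(x_{k-1}))
# on a toy instance (assumed, not from the paper):
#   A(x) = M x - b with M positive definite, C = nonnegative orthant,
#   Psi(x) = 0.5 * dist(x, C)^2, so prox of t*Psi is (v + t*P_C(v)) / (1 + t).

rng = np.random.default_rng(0)
n = 5
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)            # strongly monotone linear operator
b = rng.standard_normal(n)

A = lambda x: M @ x - b                # single-valued operator of the VI
proj_C = lambda v: np.maximum(v, 0.0)  # projection onto C (the orthant)

def prox_penalty(v, t):
    """Prox of t * (0.5 * dist(., C)^2): a blend of v and its projection."""
    return (v + t * proj_C(v)) / (1.0 + t)

L = np.linalg.norm(M, 2)               # Lipschitz constant of A (spectral norm)
x = np.zeros(n)
for k in range(1, 2001):
    lam_k = 1.0 / L                    # illustrative step size
    beta_k = float(k)                  # illustrative growing penalization parameter
    x = prox_penalty(x - lam_k * A(x), lam_k * beta_k)

# At a solution of the VI one has x = P_C(x - A(x)); print the residual as a check.
print("residual:", np.linalg.norm(x - proj_C(x - A(x))))
```

The printed residual is a rough stopping criterion: it vanishes exactly when \(x\) solves the variational inequality over \(C\).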
Keywords: penalization-gradient algorithm; variational inequalities; weak convergence; inverse strongly monotone operator; strong convergence; Lipschitz continuous and strongly monotone operator; fixed-point problems; Yosida approximation