Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems (Q1184339)

From MaRDI portal
scientific article

    Statements

    Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems (English)
    28 June 1992
    The variational inequality problem is (1): find \(x^*\in S\subset\mathbb{R}^ n\) such that \(\langle F(x^*),x-x^*\rangle\geq 0\) for all \(x\in S\), where \(S\neq\emptyset\) is closed and convex and \(F: \mathbb{R}^ n\to\mathbb{R}^ n\). Now let \(G\) be any \(n\times n\) symmetric positive definite matrix and, for fixed \(x\), consider the program (2): \(\min\{\phi_ x(y): y\in S\}\), where \(\phi_ x(y)=\langle F(x),y-x\rangle+{1\over 2}\langle y-x,G(y-x)\rangle\), and define \(f: \mathbb{R}^ n\to\mathbb{R}\) by letting \(-f(x)\) be the optimal value of (2). The main results are: (i) \(f(x)\geq 0\) for all \(x\in S\), and (ii) \(x^*\) solves (1) if and only if it solves the program (3): \(\min\{f(x): x\in S\}\), which happens if and only if \(f(x^*)=0\) and \(x^*\in S\). Moreover, \(f\) is continuously differentiable (continuous) whenever \(F\) is continuously differentiable (continuous). In the continuously differentiable case, descent methods that solve program (3) by an iterative process are presented. A list of sixteen references closes the paper.
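    The construction above can be illustrated numerically. The following is a minimal sketch, assuming \(S\) is the nonnegative orthant and \(G=(1/\alpha)I\), in which case the minimizer of (2) is the projection \(y(x)=P_ S(x-\alpha F(x))\); the affine map \(F(x)=Mx+q\), the data \(M\), \(q\), and the full-step update are hypothetical illustrative choices, not the paper's descent method (which uses a line search on \(f\)):

```python
import numpy as np

# Hypothetical data for illustration: an affine, asymmetric, strongly
# monotone map F(x) = M x + q over S = the nonnegative orthant.
M = np.array([[3.0, 1.0],
              [-1.0, 2.0]])   # asymmetric, positive definite Jacobian
q = np.array([-1.0, -1.0])

def F(x):
    return M @ x + q

def proj_S(x):
    # Projection onto S = {x : x >= 0} is componentwise clipping at 0.
    return np.maximum(x, 0.0)

def gap_and_direction(x, alpha=0.2):
    # With G = (1/alpha) I, the minimizer of program (2) over S is
    # y(x) = proj_S(x - alpha * F(x)), and the gap function is
    # f(x) = -[ <F(x), y - x> + (1/(2 alpha)) ||y - x||^2 ]  (>= 0 on S).
    Fx = F(x)
    y = proj_S(x - alpha * Fx)
    d = y - x                     # descent direction for f at x
    f = -(Fx @ d + (0.5 / alpha) * (d @ d))
    return f, d

x = np.array([1.0, 1.0])
for _ in range(200):
    f, d = gap_and_direction(x)
    if f < 1e-14:                 # f(x) = 0 characterizes a solution of (1)
        break
    x = x + d                     # full step x <- y(x) (illustrative choice)

print(x)   # converges to the VI solution x* = (1/7, 4/7), where F(x*) = 0
```

    For this data the solution lies in the interior of \(S\) (so \(F(x^*)=0\)), and the full-step iteration is a contraction; in general one would instead take a damped or line-searched step along \(d\), as the paper's descent methods do.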