Lower bounds for polynomials using geometric programming (Q2910880)
scientific article; zbMATH DE number 6081232
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Lower bounds for polynomials using geometric programming | scientific article; zbMATH DE number 6081232 | |
Statements
12 September 2012
sum of squares
geometric programming
optimization
Lower bounds for polynomials using geometric programming (English)
The authors introduce in Theorem 2.3 a new sufficient condition for a polynomial \(f\in \mathbb{R}[\underline{X}]=\mathbb{R}[X_1,\dots, X_n]\) to be a sum of squares (sos), generalizing and improving their previous results in [Arch. Math. 95, No. 4, 343--353 (2010; Zbl 1205.12001)], which in turn improved results of \textit{J. B. Lasserre} [SIAM J. Optim. 11, No. 3, 796--817 (2001; Zbl 1010.90061)] and of \textit{C. Fidalgo} and \textit{A. Kovačec} [Math. Z. 269, No. 3--4, 629--645 (2011; Zbl 1271.11045)]. The most important contribution is that their versatile theorem allows them to introduce geometric programming, which yields lower bounds \(f_{\mathrm{gp}}\) for \(f_{\mathrm{sos}}=\sup\{r: f-r \text{ is sos}\},\) and hence for \(f_*=\inf\{f(\underline{a}):\underline{a} \in \mathbb{R}^n\},\) that are better than the lower bounds \(r_{\mathrm{L}},r_{\mathrm{FK}}, r_{\mathrm{dmt}}\) given in their previous paper.

After giving in Theorem 2.1 and Corollary 2.2 interesting new proofs of a result of Hurwitz-Reznick, and of the consequence thereof given by Fidalgo-Kovačec [loc. cit.], concerning the representability of positive semidefinite forms of the type \(\sum_{i=1}^n \beta_i X_i^{2d} - \mu \underline{X}^\alpha\) with \(|\alpha|=2d,\) \(\beta_i\geq 0,\) and \(\underline{X}^\alpha=X_1^{\alpha_1} \cdots X_n^{\alpha_n},\) as sums of squares of binomials, they prove their first main result, Theorem 2.3.

Write a polynomial \(f\) as \(f=f_0+\sum_{\alpha \in \Omega} f_\alpha \underline{X}^\alpha +\sum_{i=1}^n f_{2d,i} X_i^{2d},\) where \(\Omega=\Omega(f)= \{\alpha\in \mathbb{N}^n: f_\alpha \neq 0 \}\setminus \{\underline{0},2de_1,\dots ,2de_n\}\) with standard vectors \(e_i,\) and let \(\Delta(f)=\{\alpha \in \Omega(f): f_\alpha \underline{X}^\alpha\) is not a square in \(\mathbb{R}[\underline{X}]\}.\)

Theorem 2.3: Let \(f\) be a form of degree \(2d\). A sufficient condition for \(f\) to be a sum of (binomial) squares (sobs) is that for every \(\alpha\in \Delta\) there exists a nonnegative real \(n\)-tuple \(a_\alpha=(a_{\alpha,1},\dots, a_{\alpha,n})\) such that

(i) for all \(\alpha \in \Delta\),
\[
(2d)^{2d} a_{\alpha}^\alpha = f_\alpha^{2d}\alpha^\alpha;
\]

(ii) for \(i=1,\dots ,n\),
\[
f_{2d,i} \geq \sum_{\alpha \in \Delta} a_{\alpha,i}.
\]

By suitable choices of the \(a_\alpha,\) they obtain in Corollaries 2.5, 2.6, 2.7 and 2.9 simplified sufficient conditions for sobs-ness of forms and of inhomogeneous polynomials, partially obtained already by the aforementioned authors, but also new results. Example 2.10 shows that Theorem 2.3 is strictly stronger than these corollaries.
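As a small illustration, not taken from the paper, consider the binary quartic \(f_c=X_1^4+X_2^4-cX_1^2X_2^2\) with \(c>0\), so \(2d=4\), \(\Omega=\Delta=\{(2,2)\}\) and \(f_{4,1}=f_{4,2}=1\). For \(\alpha=(2,2)\) the two conditions of Theorem 2.3 read
\[
\text{(i)}\quad 4^4\, a_{\alpha,1}^2 a_{\alpha,2}^2 = c^4\cdot 2^2\cdot 2^2,\ \text{i.e.}\ a_{\alpha,1}a_{\alpha,2}=\tfrac{c^2}{4},
\qquad
\text{(ii)}\quad a_{\alpha,1}\leq 1,\ a_{\alpha,2}\leq 1.
\]
Choosing \(a_{\alpha,1}=a_{\alpha,2}=c/2\) satisfies both exactly when \(c\leq 2\), so the theorem certifies sobs-ness of \(f_c\) for \(0<c\leq 2\); indeed \(f_c=(X_1^2-X_2^2)^2+(2-c)X_1^2X_2^2.\)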
For Section 3, given \(f\in \mathbb{R}[\underline{X}]\), note that since \(f-f_{\mathrm{sos}}\) is sos and hence \(\geq 0\), while \(f_*\) is the largest real such that \(f-f_*\geq 0,\) we have \(f_{\mathrm{sos}}\leq f_*.\) By obtaining lower bounds for \(f_{\mathrm{sos}}\), the authors thus get lower bounds for \(f_*\) and hence for \(f\).

View Theorem 2.3 as a criterion \(\Phi(\underline{f},f_0)\), in terms of the coefficients \(\underline{f}\) of a polynomial \(f\), for \(f\) to be sos. Then for \(f_\Phi:=\sup\{r: \Phi(\underline{f}, f_0-r)\},\) we get \(f_\Phi \leq f_{\mathrm{sos}} \leq f_*.\) Note that the \(a_\alpha,\) \(\alpha \in \Delta,\) enter the criterion \(\Phi\) as parameters with which one can play.

These observations lead to the formulation of a geometric program aiming at good bounds \(f_{\mathrm{gp}}\) for \(f_{\mathrm{sos}}\): the authors' \(f_{\mathrm{gp}}=\sup\{f_\Phi:\) there exist \(a_\alpha,\) \(\alpha\in \Delta,\) such that \(\Phi(\underline{f}, f_0-f_\Phi)\}.\)

Although the authors report having sometimes found large gaps between \(f_{\mathrm{gp}}\) and \(f_{\mathrm{sos}}\), they prove gap-freeness in the case \(|\Omega|=1\) and show that some other examples are gap-free as well. Runtime experiments comparing \textsc{SosTools} and \textsc{GPposy} show that the running time for geometric programming can be dramatically shorter than for semidefinite programming, even in gap-free examples.

As mentioned, in their paper cited above the authors had already given lower bounds for \(f_{\mathrm{sos}},\) called there \(r_{\mathrm{L}}, r_{\mathrm{FK}},\) and \(r_{\mathrm{dmt}}.\) It turns out that the formulae given there are intimately connected with particular choices of the \(a_\alpha\) in the objective function of the geometric program, and hence are all weaker than \(f_{\mathrm{gp}};\) that is, \(\max\{r_{\mathrm{L}}, r_{\mathrm{FK}},r_{\mathrm{dmt}}\} \leq f_{\mathrm{gp}}\) for any \(f,\) where typically the inequality is strict.
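To give a concrete sense of the kind of optimization problem involved, here is a minimal sketch of a geometric program, formulated in CVXPY rather than \textsc{GPposy} and not reproducing the authors' actual program: for the toy form \(f_c\) above with \(c=1\), condition (i) fixes the monomial equality \(a_1a_2=1/4\), and one asks how small the \(a_i\) entering condition (ii) can be kept.

```python
import cvxpy as cp

# Minimal sketch, not the authors' program: a toy geometric program in the
# spirit of the f_gp bound.  For f_c = X1^4 + X2^4 - c*X1^2*X2^2 with c = 1,
# condition (i) of Theorem 2.3 forces a1*a2 = c^2/4 = 0.25, while condition
# (ii) asks a1 <= 1 and a2 <= 1; we minimize max(a1, a2).
c = 1.0
a1 = cp.Variable(pos=True, name="a1")
a2 = cp.Variable(pos=True, name="a2")

constraints = [a1 * a2 == c**2 / 4]          # monomial equality, condition (i)
objective = cp.Minimize(cp.maximum(a1, a2))  # keep the a_i small, condition (ii)

prob = cp.Problem(objective, constraints)
prob.solve(gp=True)                          # geometric-programming mode

# Optimal value 0.5 <= 1, so the sufficient condition is met for c = 1.
print(prob.value, a1.value, a2.value)
```

The actual program in the paper optimizes over all \(a_{\alpha,i}\) simultaneously so as to maximize the admissible \(r\) in \(\Phi(\underline{f}, f_0-r)\), which is what produces the bound \(f_{\mathrm{gp}}\).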