The effect of deterministic noise in subgradient methods (Q1960191)
From MaRDI portal
Full work available at URL: https://doi.org/10.1007/s10107-008-0262-5
OpenAlex ID: W2172228337
Revision as of 19:40, 19 March 2024
Language | Label | Description | Also known as
---|---|---|---
English | The effect of deterministic noise in subgradient methods | scientific article |
Statements
The effect of deterministic noise in subgradient methods (English)
13 October 2010
In the paper the authors consider the problem \[ \min_{x\in X} \; f(x) \] where \(f:\mathbb R^n\to \mathbb R\) is a convex function and \(X\) is a nonempty, closed and convex set in \(\mathbb R^n\), in order to focus on the influence of noise on subgradient methods. In particular, they consider an approximate \(\epsilon\)-subgradient method in which the \(\epsilon\)-subgradients are computed inexactly; this method is given by \[ x_{k+1}={\mathcal P}_X[x_k-\alpha_k\tilde{g}_k], \] where \({\mathcal P}_X\) denotes the projection onto the set \(X\), the vector \(x_0\in X\) is an initial iterate, and the scalar \(\alpha_k\) is a positive stepsize. The vector \(\tilde{g}_k\) is an approximate subgradient of the form \[ \tilde{g}_k=g_k+r_k, \] where \(r_k\) is a noise vector and \(g_k\) is an \(\epsilon_k\)-subgradient of \(f\) at \(x_k\) for some \(\epsilon_k\geq 0\). Several stepsize rules are considered; in particular, the convergence properties of the method are studied under the constant stepsize rule, the diminishing stepsize rule and the dynamic stepsize rule.

The paper is organized as follows. In Section 2, the authors establish the convergence properties of the method for a compact constraint set \(X\), and in Section 3 they discuss the convergence properties for the case when the objective function \(f\) has a set of sharp minima. In both cases, using different stepsize rules, they prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case the tolerance is nonzero, but in the second case the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below a certain threshold. As a special case, their results show that, with \(\epsilon_k\equiv 0\) and the diminishing stepsize rule, the method converges to the optimal value even if the noise is nonvanishing, as long as it is small enough.
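A minimal numerical sketch of this iteration (not taken from the paper; the objective \(f(x)=|x|\), which has a sharp minimum at \(0\), the interval \(X=[-2,2]\), the constant noise level, and the stepsize \(\alpha_k=1/(k+1)\) are all illustrative assumptions):

```python
def project(x, lo=-2.0, hi=2.0):
    """Euclidean projection onto the interval X = [lo, hi]."""
    return max(lo, min(hi, x))

def sign(v):
    """A subgradient of |.| at v (at v = 0 any value in [-1, 1] is valid)."""
    return (v > 0) - (v < 0)

def noisy_subgradient_method(x0=2.0, noise=0.1, iters=2000):
    """Iterate x_{k+1} = P_X[x_k - alpha_k * (g_k + r_k)] for f(x) = |x|,
    with a constant deterministic noise r_k = noise and alpha_k = 1/(k+1)."""
    x = x0
    for k in range(iters):
        alpha = 1.0 / (k + 1)          # diminishing stepsize rule
        g_tilde = sign(x) + noise      # exact subgradient plus noise r_k
        x = project(x - alpha * g_tilde)
    return x

x_final = noisy_subgradient_method()
```

Because the noise level 0.1 is below the slope of the sharp minimum, the iterates settle near the exact minimizer \(x=0\) despite the noise never vanishing, illustrating the Section 3 result in this toy setting.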
In Section 4, they consider an objective function \(f\) that is the sum of a large number of convex functions, in which case an incremental subgradient method can also be used. They give analogs of the results of Sections 2 and 3 for incremental subgradient methods.
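A similarly hedged sketch of the incremental variant, assuming the sum \(f(x)=\sum_i |x-a_i|\) with anchors \(a_i\in\{-1,0,1\}\) (whose sharp minimum is the median, \(x^\ast=0\)); the noise level, constraint interval and stepsize rule are again illustrative choices, not the paper's:

```python
def sign(v):
    """A subgradient of |.| at v (0 is a valid choice at v = 0)."""
    return (v > 0) - (v < 0)

def incremental_method(anchors=(-1.0, 0.0, 1.0), x0=4.0, noise=0.05, cycles=1000):
    """Minimize f(x) = sum_i |x - a_i| over X = [-5, 5]: each cycle processes
    the components f_i sequentially, taking one noisy subgradient step per
    component instead of one step for the whole sum."""
    x, k = x0, 0
    for _ in range(cycles):
        for a in anchors:                  # one incremental pass over the sum
            alpha = 1.0 / (k + 1)          # diminishing stepsize rule
            g_tilde = sign(x - a) + noise  # component subgradient plus noise
            x = max(-5.0, min(5.0, x - alpha * g_tilde))  # projection onto X
            k += 1
    return x

x_final = incremental_method()
```

The per-component steps make each iteration cheap when the sum has many terms, which is the setting the authors' Section 4 analogs address.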
convex constrained optimization
\(\epsilon\)-subgradient methods
noise