Convergence of a generalized subgradient method for nondifferentiable convex optimization (Q757242)
From MaRDI portal
Full work available at URL: https://doi.org/10.1007/bf01594925
OpenAlex ID: W2038295970
scientific article (English)
Statements
Title: Convergence of a generalized subgradient method for nondifferentiable convex optimization (English)
Publication year: 1991
The generalized subgradient method proposed in this paper for minimizing a convex function \(f\colon \mathbb{R}^n \to \mathbb{R}\) generates a sequence of points by an iteration of the form \[ x_{k+1} = x_k - \sum_{i \in I_k} s_k^i g_i, \] where \(I_k \subset \{1,\dots,k\}\), \(g_i \in \partial f(x_i)\) and \(s_k^i \geq 0\). Observing that this iteration can be written as \(x_{k+1} = x_k - s_k d_k\), with appropriate \(s_k \geq 0\) and \(d_k \in \partial_{\epsilon_k} f(x_k)\) for some \(\epsilon_k \geq 0\), the authors derive convergence conditions from known convergence theorems for the \(\epsilon\)-subgradient method.
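The iteration above can be sketched numerically. The following is a minimal illustration of the plain subgradient special case (\(I_k = \{k\}\), i.e. only the current subgradient is used) with diminishing steps, applied to the nondifferentiable convex test function \(f(x) = \|x\|_1\). The test function, the step rule \(s_k = 1/(k+1)\), and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def f(x):
    # Nondifferentiable convex test function: f(x) = ||x||_1 (assumption,
    # chosen only to illustrate the iteration; not from the paper).
    return np.abs(x).sum()

def subgradient(x):
    # A valid subgradient of ||x||_1: sign(x) componentwise
    # (0 is a legitimate choice at points where a component is 0).
    return np.sign(x)

def subgradient_method(x0, num_iters=5000):
    # Plain subgradient iteration x_{k+1} = x_k - s_k g_k with diminishing
    # steps s_k = 1/(k+1). The paper's generalized scheme would instead
    # combine several past subgradients g_i, i in I_k, with weights s_k^i.
    x = np.asarray(x0, dtype=float)
    best = f(x)
    for k in range(num_iters):
        g = subgradient(x)
        x = x - g / (k + 1)
        best = min(best, f(x))  # track the best value seen so far
    return best

print(subgradient_method([3.0, -2.0]))
```

Because a subgradient step need not decrease \(f\), the standard practice shown here is to record the best objective value encountered; with diminishing, non-summable steps this best value converges to the minimum (here 0).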
nondifferentiable optimization
generalized subgradient method
convergence conditions
\(\epsilon\)-subgradient