The effect of deterministic noise in subgradient methods (Q1960191)

From MaRDI portal
Author: Angelia Nedić
Reviewed by: Rita Pini
MaRDI profile type: MaRDI publication profile
Full work available at URL: https://doi.org/10.1007/s10107-008-0262-5
OpenAlex ID: W2172228337
Revision as of 07:30, 3 July 2024

Language: English
Label: The effect of deterministic noise in subgradient methods
Description: scientific article

    Statements

    The effect of deterministic noise in subgradient methods (English)
    13 October 2010
    In the paper the authors consider the problem \[ \min_{x\in X} \; f(x), \] where \(f:\mathbb R^n\to \mathbb R\) is a convex function and \(X\) is a nonempty, closed and convex set in \(\mathbb R^n\), in order to study the influence of noise on subgradient methods. In particular, they consider an approximate \(\epsilon\)-subgradient method in which the \(\epsilon\)-subgradients are computed inexactly; this method is given by \[ x_{k+1}={\mathcal P}_X[x_k-\alpha_k\tilde{g}_k], \] where \({\mathcal P}_X\) denotes the projection on the set \(X\). The vector \(x_0\in X\) is an initial iterate and the scalar \(\alpha_k\) is a positive stepsize. The vector \(\tilde{g}_k\) is an approximate subgradient of the form \[ \tilde{g}_k=g_k+r_k, \] where \(r_k\) is a noise vector and \(g_k\) is an \(\epsilon_k\)-subgradient of \(f\) at \(x_k\) for some \(\epsilon_k\geq 0\). Several stepsize rules are considered; in particular, the convergence properties of the method are studied under the constant, the diminishing and the dynamic stepsize rules.

The paper is organized as follows. In Section 2, the authors establish the convergence properties of the method for a compact constraint set \(X\), and in Section 3 they discuss the convergence properties of the method when the objective function \(f\) has a set of sharp minima. In both cases, using the different stepsize rules, they prove convergence to the optimal value within some tolerance that is given explicitly in terms of the errors. In the first case the tolerance is nonzero, but in the second case the optimal value can be obtained exactly, provided the size of the error in the subgradient computation is below some threshold. As a special case, their results show that, with \(\epsilon_k\equiv 0\) and the diminishing stepsize rule, the method converges to the optimal value even when the noise is nonvanishing, as long as it is small enough.
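The iteration described above can be sketched numerically. The following minimal illustration is not taken from the paper: the objective \(f(x)=\|x\|_1\), the box constraint \(X=[-1,1]^2\), the particular noise vector \(r_k\) and the stepsize choice are all assumptions, chosen only to exhibit the diminishing stepsize rule under a small nonvanishing perturbation of the subgradient.

```python
import numpy as np

# Illustrative sketch (assumed problem data, not from the paper):
# the noisy projected subgradient iteration
#     x_{k+1} = P_X[x_k - alpha_k * (g_k + r_k)]
# for f(x) = ||x||_1 over the box X = [-1, 1]^2, with a small
# deterministic noise vector r_k added to each subgradient.

def project_box(x, lo=-1.0, hi=1.0):
    """Projection P_X onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def subgrad_l1(x):
    """A subgradient of f(x) = ||x||_1 (the sign vector)."""
    return np.sign(x)

x = np.array([0.9, -0.7])                # initial iterate x_0 in X
for k in range(1, 2001):
    alpha = 1.0 / k                      # diminishing stepsize rule
    r = 1e-3 * np.array([1.0, -1.0])     # small nonvanishing noise r_k (assumed)
    g_tilde = subgrad_l1(x) + r          # inexact subgradient g_k + r_k
    x = project_box(x - alpha * g_tilde)

print(np.abs(x).sum())  # f(x_k) approaches the optimal value 0
```

With the diminishing stepsize the iterates settle near the minimizer despite the constant-magnitude noise, illustrating the kind of behaviour the paper quantifies.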
In Section 4, they consider an objective function \(f\) that is the sum of a large number of convex functions, in which case an incremental subgradient method can also be used. They give analogs of the results of Sections 2 and 3 for incremental subgradient methods.
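A rough sketch of the incremental variant, again with illustrative assumptions rather than data from the paper: \(f\) is taken as \(f(x)=\sum_i |x-b_i|\) over \(X=[0,10]\) with hypothetical data \(b_i\), and each cycle takes one noisy projected step per component function.

```python
import numpy as np

# Hypothetical sketch of an incremental subgradient method: when
# f = sum_i f_i, one cycles through the components, taking a projected
# step along a (possibly noisy) subgradient of a single f_i at a time.
# Example (assumed data): f(x) = sum_i |x - b_i| on X = [0, 10],
# whose minimizer is the median of the b_i.

b = np.array([1.0, 2.0, 3.0, 4.0, 9.0])  # component data (illustrative)

def project_interval(x, lo=0.0, hi=10.0):
    """Projection P_X onto the interval [lo, hi]."""
    return min(max(x, lo), hi)

x = 8.0                                  # initial iterate in X
for k in range(1, 501):
    alpha = 1.0 / k                      # diminishing stepsize rule
    for bi in b:                         # one incremental pass over components
        g_i = np.sign(x - bi)            # subgradient of the component |x - b_i|
        r = 1e-3                         # small deterministic error (assumed)
        x = project_interval(x - alpha * (g_i + r))

print(x)  # settles near the median of b
```

Within each cycle the iterate is updated component by component, so the per-cycle progress mirrors that of the full subgradient step up to an error of the order of the stepsize, which is the regime the analogs of Sections 2 and 3 address.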
    convex constrained optimization
    \(\epsilon\)-subgradient methods
    noise
