Random algorithms for convex minimization problems (Q644912)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Random algorithms for convex minimization problems | scientific article | |
Statements
Random algorithms for convex minimization problems (English)
7 November 2011
The focus of this paper is on solving a convex minimization problem over a convex set given as the intersection of an arbitrary family of convex inequality constraints. The primary interest is in the case when the constraint set is not known a priori but is instead revealed over time through random realizations of its components. A special case of this problem is the feasibility problem, which arises in many applications, including image restoration and tomography. To address this problem, the author proposes gradient and subgradient iterative methods with random feasibility updates. The algorithms resemble a learning process in which the constraint set is learned while the optimization problem is being solved. The convergence properties of the proposed random algorithms are studied in two cases: when the objective function is differentiable with Lipschitz continuous gradients and when it is nondifferentiable. The gradient and subgradient algorithms are analyzed with diminishing and non-diminishing stepsizes. For diminishing stepsizes, almost sure convergence to an optimal solution is established. For non-diminishing stepsizes, error bounds are derived for the expected distances of the weighted averages of the iterates from the constraint set, as well as for the expected suboptimality of the function values along the weighted averages.
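As a rough illustration of such a scheme (not the paper's exact algorithm), the following Python sketch interleaves a gradient step on an assumed quadratic objective with a Polyak-type feasibility step toward one randomly sampled half-space constraint; the objective, the constraint data, and the 1/k stepsize are assumptions chosen only for this example.

```python
import numpy as np

# Minimal sketch: minimize f(x) = 0.5 * ||x - c||^2 over the intersection of
# many half-spaces {x : a_i^T x <= b_i}.  Each iteration takes a gradient step
# and then a feasibility step toward a single randomly sampled constraint,
# projecting onto that half-space only if it is violated.

rng = np.random.default_rng(0)

n, m = 5, 50                          # dimension, number of constraints
A = rng.normal(size=(m, n))
b = rng.uniform(1.0, 2.0, size=m)     # ensures x = 0 is feasible
c = rng.normal(size=n)                # unconstrained minimizer of f

def grad_f(x):
    return x - c                      # gradient of 0.5 * ||x - c||^2

x = np.zeros(n)
for k in range(1, 20001):
    alpha = 1.0 / k                   # diminishing stepsize
    v = x - alpha * grad_f(x)         # gradient step

    i = rng.integers(m)               # random constraint index
    violation = A[i] @ v - b[i]
    if violation > 0:                 # random feasibility (projection) step
        v = v - (violation / (A[i] @ A[i])) * A[i]
    x = v

print("final point:", np.round(x, 3))
print("max constraint violation:", np.max(A @ x - b))
```

A diminishing stepsize such as 1/k corresponds to the regime in which the paper establishes almost sure convergence; the sketch only shows the mechanics of alternating a descent step with a random feasibility update.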
convex minimization
gradient algorithms
subgradient algorithms
random algorithms
error bounds