Uncertainty quantification for subgradient descent, with applications to relaxations of discrete problems

Publication: 6404057

arXiv: 2207.02078
MaRDI QID: Q6404057
FDO: Q6404057


Authors: Conor McMeel, Panos Parpas


Publication date: 5 July 2022

Abstract: We consider the problem of minimizing a convex function that depends on an uncertain parameter $\theta$. The uncertainty in the objective function means that the optimum, $x(\theta)$, is also a function of $\theta$. We propose an efficient method to compute $x(\theta)$ and its statistics. We use a chaos expansion of $x(\theta)$ along a truncated basis and study a restarted subgradient method that computes the optimal coefficients. We establish the convergence rate of the method as the number of basis functions, and hence the dimensionality of the optimization problem, increases. We give a non-asymptotic convergence rate for subgradient descent, building on earlier work that studied gradient and accelerated gradient descent. Additionally, this work explicitly deals with the issue of projections and suggests a method for handling non-trivial projections. We show how this algorithm can be used to quantify uncertainty in discrete problems by utilising the (convex) Lovász extension of the min $s,t$-cut graph problem.
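
The restarted subgradient scheme over chaos-expansion coefficients can be pictured with a small numerical sketch. The following is a minimal illustration, not the authors' implementation: it assumes $\theta$ uniform on $[-1,1]$, a truncated Legendre basis, the toy objective $f(x,\theta) = |x-\theta|$ (whose minimizer is $x(\theta)=\theta$), and a stand-in Euclidean-ball projection in place of the paper's non-trivial projections.

```python
# A minimal sketch, not the authors' code: restarted projected
# subgradient descent on the coefficients of a truncated chaos
# expansion x(theta) = sum_k c_k * P_k(theta).
# Assumed setup: theta ~ Uniform(-1, 1), Legendre basis P_k, and the
# toy objective f(x, theta) = |x - theta|, minimized by x(theta) = theta.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
K = 4            # number of basis functions in the truncated expansion
R, T = 6, 500    # restart rounds and inner iterations per round

def subgrad(c, thetas):
    # Monte Carlo estimate of a subgradient of E[|x(theta) - theta|]
    # with respect to c, via the chain rule through the expansion.
    Phi = legendre.legvander(thetas, K - 1)   # (n, K) basis matrix
    r = np.sign(Phi @ c - thetas)             # subgradient of |x - t| in x
    return Phi.T @ r / len(thetas)

def project(c, radius=10.0):
    # Stand-in Euclidean-ball projection; the paper's non-trivial
    # projections are problem specific.
    n = np.linalg.norm(c)
    return c if n <= radius else c * (radius / n)

c, step = np.zeros(K), 0.5
for _ in range(R):        # restart: halve the step size each round
    for _ in range(T):
        thetas = rng.uniform(-1.0, 1.0, size=32)
        c = project(c - step * subgrad(c, thetas))
    step *= 0.5

print(np.round(c, 3))     # expect roughly [0, 1, 0, 0]
```

Under these assumptions the recovered coefficients should be close to $(0, 1, 0, 0)$, i.e. the expansion reproduces $x(\theta) = \theta$ in the Legendre basis.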
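The Lovász extension used for the min $s,t$-cut application can likewise be sketched. The example below is illustrative and not taken from the paper: it evaluates the extension of the cut set function on a toy four-vertex graph, where a subset $S$ of the interior vertices determines the source side $S \cup \{s\}$. Since this set function does not vanish on the empty set, the constant $F(\emptyset)$ is retained so the extension interpolates $F$ on indicator vectors.

```python
# A minimal sketch of the (convex) Lovász extension for a min s,t-cut
# set function on a toy undirected graph; the graph and all names are
# illustrative assumptions, not taken from the paper.
import numpy as np

# Edges (u, v, weight) on vertices {s, a, b, t}. The set function is
# defined on subsets S of the interior vertices {a, b}: F(S) is the
# weight of the cut separating S ∪ {s} from the rest.
edges = [("s", "a", 2.0), ("s", "b", 1.0),
         ("a", "b", 1.0), ("a", "t", 1.0), ("b", "t", 2.0)]
interior = ["a", "b"]

def cut_value(S):
    side = set(S) | {"s"}
    return sum(w for u, v, w in edges if (u in side) != (v in side))

def lovasz(F, x, ground):
    # Sort coordinates in decreasing order and take the telescoping sum
    # of marginal gains; F(∅) is kept as a constant term because the
    # s,t-cut function does not vanish on the empty set.
    order = sorted(range(len(ground)), key=lambda i: -x[i])
    val = prev = F([])
    S = []
    for i in order:
        S.append(ground[i])
        cur = F(S)
        val += x[i] * (cur - prev)
        prev = cur
    return val

x = np.array([0.7, 0.3])   # relaxed (fractional) membership of {a, b}
print(lovasz(cut_value, x, interior))           # extension at a fractional point
print(lovasz(cut_value, np.array([1.0, 0.0]),   # equals F({a}) on an indicator
             interior))
```

Because the Lovász extension of the cut function is convex and piecewise linear, it can be minimized with exactly the projected subgradient machinery sketched above, which is how the paper connects uncertainty quantification for subgradient descent to relaxations of discrete cut problems.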

