Random gradient-free minimization of convex functions (Q2397749)

From MaRDI portal
scientific article

    Statements

    Random gradient-free minimization of convex functions (English)
    23 May 2017
    Let \(E\) be an \(n\)-dimensional space. The Gaussian approximation of \(f:E\rightarrow \mathbb{R}\) is defined as \(f_{\mu }(x)=\frac{1}{\kappa } \int\limits_{E}f(x+\mu u)e^{-\frac{1}{2}\left\| u\right\| ^{2}}du\), with \(\kappa =\int\limits_{E}e^{-\frac{1}{2}\left\| u\right\| ^{2}}du\). Among other properties of this approximation, the authors prove that, if \(f\) is convex and Lipschitz continuous with constant \(L_{0}(f)\), then for any \(x\in E\) and \(\mu \geq 0\) one has \(\nabla f_{\mu }(x)\in \partial _{\epsilon }f(x)\), with \(\epsilon =\mu L_{0}(f)n^{\frac{1}{2}}\). Three random gradient-free oracles, \(g_{\mu }\), \(\widehat{g}_{\mu }\) and \(g_{0}\), are introduced; for a random vector \(u\in E\) having a Gaussian distribution with correlation operator \(B^{-1}\), they return \(g_{\mu}(x)=\frac{f(x+\mu u)-f(x)}{\mu }Bu\), \(\widehat{g}_{\mu}(x)=\frac{f(x+\mu u)-f(x-\mu u)}{2\mu}Bu\) and \(g_{0}(x)=f^{\prime}(x,u)Bu\), respectively. Upper bounds are provided for the expected squared norms of these oracles. The use of the oracles for minimizing convex functions over closed convex sets, in both the nonsmooth and the smooth cases, is shown, and their rates of convergence are compared with that of the subgradient method. An accelerated version of one of the methods is presented for the case when the objective function is strongly convex and of class \(C^{1,1}\). The application of the methods to the unconstrained minimization of nonconvex functions is briefly studied, and the rate of convergence to a stationary point is estimated. Computational experiments comparing the performance of the proposed methods with standard gradient methods are presented.
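    The oracles above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it takes \(B=I\) (so \(u\sim N(0,I_n)\)), uses the forward-difference oracle \(g_{\mu}\) in the plain update \(x_{k+1}=x_k-h\,g_{\mu}(x_k)\), and applies it to an illustrative smooth convex test function; the step size, smoothing parameter, and iteration count are arbitrary choices for the demo.

    ```python
    import random

    def gaussian_vector(n):
        # Sample u ~ N(0, I_n); with B = I the correlation operator B^{-1} is also I.
        return [random.gauss(0.0, 1.0) for _ in range(n)]

    def oracle_g_mu(f, x, mu, u):
        # Forward-difference oracle: g_mu(x) = ((f(x + mu*u) - f(x)) / mu) * B u, with B = I.
        x_plus = [xi + mu * ui for xi, ui in zip(x, u)]
        scale = (f(x_plus) - f(x)) / mu
        return [scale * ui for ui in u]

    def oracle_g_mu_hat(f, x, mu, u):
        # Central-difference oracle: ((f(x + mu*u) - f(x - mu*u)) / (2*mu)) * B u, with B = I.
        x_plus = [xi + mu * ui for xi, ui in zip(x, u)]
        x_minus = [xi - mu * ui for xi, ui in zip(x, u)]
        scale = (f(x_plus) - f(x_minus)) / (2.0 * mu)
        return [scale * ui for ui in u]

    def random_search(f, x0, step, mu, iters, seed=0):
        # Basic unconstrained scheme: x_{k+1} = x_k - h * g_mu(x_k), fresh u each step.
        random.seed(seed)
        x = list(x0)
        for _ in range(iters):
            u = gaussian_vector(len(x))
            g = oracle_g_mu(f, x, mu, u)
            x = [xi - step * gi for xi, gi in zip(x, g)]
        return x

    # Illustrative smooth convex test function f(x) = ||x||^2, minimized at the origin.
    f = lambda x: sum(xi * xi for xi in x)
    x_final = random_search(f, [1.0, 1.0, 1.0], step=0.05, mu=1e-6, iters=2000)
    ```

    Even though each step uses only two function values and a random direction, the iterates drift toward the minimizer in expectation, at the cost of the dimension-dependent factor in the complexity bounds discussed above.
    
    
    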
    convex functions
    optimization
    derivative-free methods
    random methods
    complexity bounds
    stochastic optimization