Variable metric random pursuit
From MaRDI portal
Abstract: We consider unconstrained randomized optimization of smooth convex objective functions in the gradient-free setting. We analyze Random Pursuit (RP) algorithms with fixed (F-RP) and variable metric (V-RP). The algorithms use only zeroth-order information about the objective function and compute an approximate solution by repeated optimization over randomly chosen one-dimensional subspaces. The distribution of search directions is dictated by the chosen metric. Variable Metric RP uses novel variants of a randomized zeroth-order Hessian approximation scheme recently introduced by Leventhal and Lewis (D. Leventhal and A. S. Lewis, Optimization 60(3), 329--345, 2011). We present (i) a refined analysis of the expected single-step progress of RP algorithms and their global convergence on (strictly) convex functions and (ii) novel convergence bounds for V-RP on strongly convex functions. We also quantify how well the employed metric needs to match the local geometry of the function in order for the RP algorithms to converge with the best possible rate. Our theoretical results are accompanied by numerical experiments comparing V-RP with the derivative-free schemes CMA-ES, Implicit Filtering, Nelder-Mead, NEWUOA, Pattern-Search, and Nesterov's gradient-free algorithms.
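As a rough illustration of the fixed-metric variant described in the abstract, the sketch below draws each search direction from N(0, B) with metric B = L Lᵀ and then minimizes the objective along that line using only function values. The ternary line search on a bounded interval, the parameter names, and the iteration counts are illustrative assumptions, not the paper's exact scheme.

```python
import random

def random_pursuit(f, x0, metric_sqrt=None, iters=200, step_bound=10.0):
    """Illustrative fixed-metric Random Pursuit sketch (zeroth-order only).

    Each iteration: draw u ~ N(0, B) with B = L L^T (L = metric_sqrt),
    then approximately minimize t -> f(x + t*u) by derivative-free
    ternary search on [-step_bound, step_bound]. Interval bound and
    search depth are hypothetical choices for this sketch.
    """
    n = len(x0)
    x = list(x0)
    if metric_sqrt is None:  # identity metric -> isotropic search directions
        metric_sqrt = [[1.0 if i == j else 0.0 for j in range(n)]
                       for i in range(n)]
    for _ in range(iters):
        g = [random.gauss(0.0, 1.0) for _ in range(n)]
        u = [sum(metric_sqrt[i][j] * g[j] for j in range(n)) for i in range(n)]
        phi = lambda t: f([x[i] + t * u[i] for i in range(n)])
        # derivative-free 1-D line search (ternary search, unimodal phi)
        lo, hi = -step_bound, step_bound
        for _ in range(60):
            m1 = lo + (hi - lo) / 3.0
            m2 = hi - (hi - lo) / 3.0
            if phi(m1) < phi(m2):
                hi = m2
            else:
                lo = m1
        t = (lo + hi) / 2.0
        x = [x[i] + t * u[i] for i in range(n)]
    return x
```

On a strongly convex quadratic, repeated exact minimization along Gaussian directions converges linearly in expectation; the variable-metric version would additionally adapt `metric_sqrt` from a randomized zeroth-order Hessian estimate, which this sketch omits.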
Recommendations
- Optimization of convex functions with random pursuit
- Random gradient-free minimization of convex functions
- Zeroth-order optimization with orthogonal random directions
- Random perturbation of the variable metric method for unconstrained nonsmooth nonconvex optimization
- scientific article; zbMATH DE number 3862941
Cites work
- scientific article; zbMATH DE number 3956832
- scientific article; zbMATH DE number 54139
- scientific article; zbMATH DE number 3014797
- A Family of Variable-Metric Methods Derived by Variational Means
- A Simplex Method for Function Minimization
- A new approach to variable metric algorithms
- Conditioning of Quasi-Newton Methods for Function Minimization
- Convergence Conditions for Ascent Methods
- Convergence Conditions for Ascent Methods. II: Some Corrections
- Implicit filtering
- Lower Bounds for Hit-and-Run Direct Search
- Matrix Analysis
- Matrix tricks for linear statistical models. Our personal top twenty.
- Minimization of functions having Lipschitz continuous first partial derivatives
- On Steepest Descent
- Optimization of Globally Convex Functions
- Optimization of convex functions with random pursuit
- Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
- Random gradient-free minimization of convex functions
- Randomized Hessian estimation and directional search
- Stochastic optimization in system design
- The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
- The NEWUOA software for unconstrained optimization without derivatives
- Variable Metric Method for Minimization
- When does the expectation of a ratio equal the ratio of expectations?
Cited in (6)
- Global Linear Convergence of Evolution Strategies on More than Smooth Strongly Convex Functions
- Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains
- Randomized quasi-Newton updates are linearly convergent matrix inversion algorithms
- scientific article; zbMATH DE number 5575640
- Optimization of convex functions with random pursuit
- Randomized iterative methods for linear systems