A derivative-free comirror algorithm for convex optimization
DOI: 10.1080/10556788.2014.968158 · zbMath: 1328.90104 · arXiv: 1210.6403 · OpenAlex: W2162546528 · MaRDI QID: Q3458813
Walaa M. Moursi, Heinz H. Bauschke, Warren L. Hare
Publication date: 28 December 2015
Published in: Optimization Methods and Software
Abstract: We consider $\min\{f(x) : g(x) \le 0,\ x \in X\}$, where $X$ is a compact convex subset of $\mathbb{R}^m$, and $f$ and $g$ are continuous convex functions defined on an open neighbourhood of $X$. We work in the setting of derivative-free optimization, assuming that $f$ and $g$ are available through a black-box that provides only function values for a lower-$\mathcal{C}^2$ representation of the functions. We present a derivative-free optimization variant of the $\varepsilon$-comirror algorithm [BBTGBT2010]. Algorithmic convergence hinges on the ability to accurately approximate subgradients of lower-$\mathcal{C}^2$ functions, which we prove is possible through linear interpolation. We provide a convergence analysis that quantifies the difference between the function values of the iterates and the optimal function value. We find that the DFO algorithm we develop has the same convergence result as the original gradient-based algorithm. We present some numerical testing that demonstrates the practical feasibility of the algorithm, and conclude with some directions for further research.
Full work available at URL: https://arxiv.org/abs/1210.6403
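The abstract's two ingredients, interpolation-based subgradient approximation and the CoMirror rule of switching between objective and constraint, can be illustrated in a few lines of Python. The sketch below is an assumption-laden simplification, not the paper's exact method: it uses a forward-difference simplex gradient as the approximate subgradient, the Euclidean mirror map (so the mirror step reduces to a projected subgradient step), and a diminishing step size; the names simplex_gradient and dfo_comirror are illustrative.

```python
import numpy as np

def simplex_gradient(fun, x, h=1e-4):
    """Approximate a subgradient of `fun` at `x` by linear interpolation
    over the sample set {x, x + h*e_1, ..., x + h*e_n}.  At smooth points
    this is a forward-difference gradient estimate; the paper proves that
    such interpolation-based estimates approximate subgradients of
    lower-C^2 functions."""
    n = x.size
    f0 = fun(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (fun(x + e) - f0) / h
    return g

def dfo_comirror(fun, con, project, x0, steps=500, h=1e-4):
    """Minimal CoMirror-style sketch with the Euclidean mirror map, so the
    mirror step is an ordinary projected subgradient step.  `project` maps a
    point back onto the compact convex set X.  When the constraint con(x) > 0
    we step along an approximate subgradient of the constraint instead of the
    objective (the CoMirror switching rule)."""
    x = x0.copy()
    best_x, best_f = None, np.inf
    for k in range(1, steps + 1):
        if con(x) <= 0:                     # feasible: work on the objective
            d = simplex_gradient(fun, x, h)
            fx = fun(x)
            if fx < best_f:                 # track the best feasible iterate
                best_x, best_f = x.copy(), fx
        else:                               # infeasible: reduce the constraint
            d = simplex_gradient(con, x, h)
        t = 1.0 / (np.linalg.norm(d) * np.sqrt(k) + 1e-12)  # diminishing step
        x = project(x - t * d)
    return best_x, best_f

# Toy usage: minimize f(x) = |x1| + x2^2 subject to g(x) = x1 + x2 - 1 <= 0
# over the box X = [-2, 2]^2; the solution is the origin.
f = lambda x: abs(x[0]) + x[1] ** 2
g = lambda x: x[0] + x[1] - 1.0
proj_box = lambda x: np.clip(x, -2.0, 2.0)

x_best, f_best = dfo_comirror(f, g, proj_box, np.array([1.5, 1.5]))
print(x_best, f_best)
```

Note that the forward difference handles the nonsmooth point of $|x_1|$ gracefully: at $x_1 = 0$ it returns $1$, which is a valid element of the subdifferential $[-1, 1]$.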
convex optimization · derivative-free optimization · approximate subgradient · Bregman distance · lower-\(\mathcal{C}^2\) · non-Euclidean projected subgradient
Convex programming (90C25) · Derivative-free methods and methods using generalized derivatives (90C56) · Numerical optimization and variational techniques (65K10)
Cites Work
- A derivative-free approximate gradient sampling algorithm for finite minimax problems
- CONDOR, a new parallel, constrained extension of Powell's UOBYQA algorithm: Experimental results and comparison with the DFO algorithm
- The CoMirror algorithm for solving nonsmooth constrained convex problems
- Worst case complexity of direct search
- Objective-derivative-free methods for constrained optimization
- Mirror descent and nonlinear projected subgradient methods for convex optimization.
- UOBYQA: unconstrained optimization by quadratic approximation
- Smoothed analysis of \(\kappa(A)\)
- Geometry of interpolation sets in derivative free optimization
- The Ordered Subsets Mirror Descent Optimization Method with Applications to Tomography
- Smoothing and worst-case complexity for direct-search methods in nonsmooth optimization
- Sequential Penalty Derivative-Free Methods for Nonlinear Constrained Optimization
- An active-set trust-region method for derivative-free nonlinear bound-constrained optimization
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- OrthoMADS: A Deterministic MADS Instance with Orthogonal Directions
- Introduction to Derivative-Free Optimization
- Variational Analysis
- Derivative-free optimization methods for finite minimax problems
- Benchmarking Derivative-Free Optimization Algorithms
- A Derivative-Free Algorithm for Linearly Constrained Finite Minimax Problems
- Mesh Adaptive Direct Search Algorithms for Constrained Optimization
- Convex Analysis
Related Items (4)
- Compositions of convex functions and fully linear models
- Limiting behaviour of the generalized simplex gradient as the number of points tends to infinity on a fixed shape in \(\mathrm{IR}^n\)
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Derivative-free optimization methods