Parallel sequential Monte Carlo for stochastic gradient-free nonconvex optimization
Abstract: We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components. The proposed scheme is a stochastic zeroth order optimization algorithm which demands only the capability to evaluate small subsets of components of the cost function. It can be depicted as a bank of samplers that generate particle approximations of several sequences of probability measures. These measures are constructed in such a way that they have associated probability density functions whose global maxima coincide with the global minima of the original cost function. The algorithm selects the best performing sampler and uses it to approximate a global minimum of the cost function. We prove analytically that the resulting estimator converges to a global minimum of the cost function almost surely and provide explicit convergence rates in terms of the number of generated Monte Carlo samples and the dimension of the search space. We show, by way of numerical examples, that the algorithm can tackle cost functions with multiple minima or with broad "flat" regions which are hard to minimize using gradient-based techniques.
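The scheme described in the abstract can be illustrated with a minimal sketch: several independent samplers each evolve a particle cloud targeting tempered densities proportional to exp(-beta_t * f(x)), where the cost f is estimated from a small random mini-batch of its components (zeroth-order access only), and the best-performing sampler supplies the final estimate. All names, the annealing schedule, and the jitter/resampling details below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def smc_minimize(f_components, dim, n_samplers=4, n_particles=200,
                 n_iters=50, batch_size=5, beta=5.0, step=0.1, seed=0):
    """Sketch of a parallel SMC scheme for zeroth-order minimization.

    f_components: list of callables mapping an (n, dim) array of points
    to an (n,) array of values; the cost is their sum. Each sampler
    targets densities proportional to exp(-beta_t * f_hat(x)), where
    f_hat averages a random mini-batch of components, so the density's
    global maxima coincide with the cost's global minima.
    """
    rng = np.random.default_rng(seed)
    n_comp = len(f_components)

    def minibatch_cost(X):
        # Evaluate only a small random subset of components (zeroth order),
        # rescaled to be an unbiased estimate of the full cost.
        idx = rng.choice(n_comp, size=batch_size, replace=False)
        return sum(f_components[i](X) for i in idx) * (n_comp / batch_size)

    best_x, best_val = None, np.inf
    for _ in range(n_samplers):
        # Each sampler in the bank starts from its own random cloud.
        X = rng.normal(0.0, 2.0, size=(n_particles, dim))
        for t in range(1, n_iters + 1):
            # Jitter (mutation), then weight by the tempered target.
            X = X + step * rng.normal(size=X.shape)
            costs = minibatch_cost(X)
            beta_t = beta * t / n_iters          # simple annealing schedule
            logw = -beta_t * costs
            w = np.exp(logw - logw.max())
            w /= w.sum()
            # Multinomial resampling concentrates particles on low-cost regions.
            X = X[rng.choice(n_particles, size=n_particles, p=w)]
        # Score this sampler by the full cost at its best particle,
        # then keep the best sampler's estimate overall.
        full = sum(f(X) for f in f_components)
        i = int(np.argmin(full))
        if full[i] < best_val:
            best_val, best_x = float(full[i]), X[i]
    return best_x, best_val
```

For example, with ten components that each contribute a tenth of a quadratic bowl centered at (1, 1), the returned point should land near that minimizer. The outer loop over samplers is embarrassingly parallel, which is where the "bank of samplers" structure pays off.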
Recommendations
- On the convergence of two sequential Monte Carlo methods for maximum a posteriori sequence estimation and stochastic global optimization
- Analysis of a sequential Monte Carlo method for optimization in dynamical systems
- Concurrent stochastic methods for global optimization
- Parallel simultaneous perturbation optimization
- Parallel Bayesian global optimization of expensive functions
Cites work
- scientific article, zbMATH DE number 2106098 (no title available)
- scientific article, zbMATH DE number 2117879 (no title available)
- A Stochastic Approximation Method
- A Survey of Some Model-Based Methods for Global Optimization
- Accelerated Stochastic Algorithms for Nonconvex Finite-Sum and Multiblock Optimization
- Adapting the Number of Particles in Sequential Monte Carlo Methods Through an Online Scheme for Convergence Assessment
- Adaptive subgradient methods for online learning and stochastic optimization
- Analysis of a sequential Monte Carlo method for optimization in dynamical systems
- Introduction to Derivative-Free Optimization
- Maslov Idempotent Probability Calculus, I
- Natural evolution strategies
- Nested particle filters for online parameter estimation in discrete-time state-space Markov models
- Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels
- Nudging the particle filter
- On Accelerated Random Search
- On parallel implementation of sequential Monte Carlo methods: the island particle model
- On the convergence of two sequential Monte Carlo methods for maximum a posteriori sequence estimation and stochastic global optimization
- Optimization by simulated annealing
- Optimization methods for large-scale machine learning
- PRMLT
- Particle Filtering Framework for a Class of Randomized Optimization Algorithms
- Particle-kernel estimation of the filter density in state-space models
- Pattern Recognition and Machine Learning
- Random gradient-free minimization of convex functions
- Sequential Monte Carlo Samplers
- Sequential Monte Carlo simulated annealing
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic global optimization as a filtering problem
- The landscape of empirical risk for nonconvex losses
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties