Global solutions to nonconvex problems by evolution of Hamilton-Jacobi PDEs
From MaRDI portal
Publication: Q6575280
DOI: 10.1007/S42967-022-00239-5 · zbMATH Open: 1543.65092 · MaRDI QID: Q6575280
Authors: Howard Heaton, Samy Wu Fung, Stanley Osher
Publication date: 19 July 2024
Published in: Communications on Applied Mathematics and Computation
Recommendations
- Algorithm for overcoming the curse of dimensionality for certain non-convex Hamilton-Jacobi equations, projections and differential games
- Recent Theoretical Advances in Non-Convex Optimization
- A computational approach to non-smooth optimization by diffusion equations
- Why Do Local Methods Solve Nonconvex Problems?
Classification (MSC)
- Numerical optimization and variational techniques (65K10)
- Nonlinear programming (90C30)
- Numerical methods for partial differential equations, initial value and time-dependent initial-boundary value problems (65M99)
Cites Work
- Optimization by simulated annealing
- Differential evolution -- a simple and efficient heuristic for global optimization over continuous spaces
- First-order methods in optimization
- Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations
- Convex Analysis
- Lipschitzian optimization without the Lipschitz constant
- Weighted essentially non-oscillatory schemes
- Differential properties of the Moreau envelope
- Two approximations of solutions of Hamilton-Jacobi equations
- Title not available
- Title not available
- Proximité et dualité dans un espace hilbertien
- Algorithm 909: NOMAD: nonlinear optimization with the MADS algorithm
- Benchmarking Derivative-Free Optimization Algorithms
- Convex analysis and monotone operator theory in Hilbert spaces
- High-Order Essentially Nonoscillatory Schemes for Hamilton–Jacobi Equations
- The Elements of Statistical Learning
- Title not available
- Envelopes and nonconvex Hamilton-Jacobi equations
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Derivative-free optimization methods
- Entropy-SGD: biasing gradient descent into wide valleys
- Subgradient methods for sharp weakly convex functions
- Stochastic model-based minimization of weakly convex functions
- A stochastic subspace approach to gradient-free optimization in high dimensions
- Zeroth-order optimization with orthogonal random directions
- A one-bit, comparison-based gradient estimator
- Zeroth-order regularized optimization (ZORO): approximately sparse gradients and adaptive sampling
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Adaptive Finite-Difference Interval Estimation for Noisy Derivative-Free Optimization
- Escaping strict saddle points of the Moreau envelope in nonsmooth optimization
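For context on the idea the title refers to: the Moreau envelope u(x, t) = min_y [f(y) + |x - y|^2/(2t)] is the Hopf-Lax viscosity solution of the Hamilton-Jacobi PDE u_t + |grad u|^2/2 = 0 with initial data f, and its viscous regularization admits a Cole-Hopf (softmax) Monte Carlo estimate of the proximal point. The sketch below is a generic illustration of that mechanism on a toy double-well objective, not the authors' algorithm; the function f, the viscosity delta, the time t, and the sample count are all illustrative choices.

```python
import numpy as np

def f(y):
    # Nonconvex double-well test objective: global minimizer near y = -1.04,
    # spurious local minimizer near y = +0.97.
    return y**4 - 2.0 * y**2 + 0.3 * y

def hj_prox(x, t, delta=0.5, n=4000, rng=None):
    """Monte Carlo softmax (Cole-Hopf) estimate of the proximal point
    argmin_y f(y) + |x - y|^2 / (2 t), whose value defines the Moreau
    envelope u(x, t) -- the Hopf-Lax solution of u_t + |grad u|^2 / 2 = 0."""
    rng = np.random.default_rng(0) if rng is None else rng
    y = x + np.sqrt(delta * t) * rng.standard_normal(n)  # y ~ N(x, delta*t)
    fy = f(y)
    w = np.exp(-(fy - fy.min()) / delta)                 # stabilized weights
    return float(np.sum(w * y) / np.sum(w))

# Proximal-point iteration on the smoothed landscape: starting in the basin
# of the spurious minimizer at x = 1, the iterates cross into the basin of
# the global minimizer because the envelope averages over both wells.
rng = np.random.default_rng(1)
x = 1.0
for _ in range(30):
    x = hj_prox(x, t=4.0, rng=rng)
print(x)  # negative: the iterate has left the spurious local basin
```

For large t the envelope is smooth enough that the weighted samples see both wells, which is what lets the iteration escape the local minimizer; as t shrinks, the prox estimate sharpens toward a nearby stationary point of f.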