Active learning for saddle point calculation
Publication: Q2103465
DOI: 10.1007/s10915-022-02040-1
zbMATH Open: 1503.62066
arXiv: 2108.04698
OpenAlex: W4308532668
MaRDI QID: Q2103465
Authors: Yanyan Li
Publication date: 13 December 2022
Published in: Journal of Scientific Computing
Abstract: Saddle point (SP) calculation is a grand challenge for computationally intensive energy functions in computational chemistry, where a saddle point may represent a transition state (TS). Traditional methods must evaluate the gradient of the energy function at a very large number of locations. To reduce the number of expensive evaluations of the true gradient, we propose an active learning framework consisting of a statistical surrogate model, Gaussian process regression (GPR) for the energy function, and a single-walker dynamics method, the gentlest ascent dynamics (GAD), for saddle-type transition states. The SP is detected by applying GAD to the gradient vector and Hessian matrix of the GPR surrogate. Our key ingredient for efficiency improvement is an active learning method that sequentially designs the most informative locations and evaluates the original model at these locations to train the GPR. We formulate this active learning task as an optimal experimental design problem and propose a very efficient sample-based sub-optimal criterion to select the locations. We show that the new method significantly decreases the required number of energy or force evaluations of the original model.
Full work available at URL: https://arxiv.org/abs/2108.04698
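The core iteration described in the abstract can be illustrated by a minimal sketch of the gentlest ascent dynamics (GAD): the walker follows the negative gradient with its component along the Hessian's lowest eigenvector reversed, which turns an index-1 saddle into a stable fixed point. For clarity, an analytic double-well potential V(x, y) = (x² − 1)² + y² stands in for the paper's GPR surrogate, and the function names `gad`, `grad`, and `hess` are hypothetical, not from the paper.

```python
import numpy as np

def grad(x):
    # Gradient of V(x, y) = (x^2 - 1)^2 + y^2
    return np.array([4.0 * x[0] * (x[0]**2 - 1.0), 2.0 * x[1]])

def hess(x):
    # Hessian of the same potential (diagonal here)
    return np.array([[12.0 * x[0]**2 - 4.0, 0.0],
                     [0.0, 2.0]])

def gad(x0, dt=0.01, steps=5000, tol=1e-8):
    """Forward-Euler GAD: ascend along the softest Hessian mode,
    descend in all other directions."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        w, V = np.linalg.eigh(hess(x))
        v = V[:, 0]                      # eigenvector of the smallest eigenvalue
        g = grad(x)
        # Reflect the gradient across v: F = -(I - 2 v v^T) g
        x += dt * (-(g - 2.0 * np.dot(v, g) * v))
        if np.linalg.norm(grad(x)) < tol:
            break
    return x

# Started inside the negative-curvature region, the walker
# approaches the index-1 saddle at the origin.
x_sp = gad([0.5, 0.3])
```

In the paper, the analytic `grad` and `hess` above are replaced by the gradient and Hessian of the GPR surrogate, and the active learning loop decides where to query the expensive true model before refitting the surrogate.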
Recommendations
- Efficient sampling of saddle points with the minimum-mode following method
- Active Learning for Enumerating Local Minima Based on Gaussian Process Derivatives
- Minimum energy path calculations with Gaussian process regression
- An iterative minimization formulation for saddle point search
- A new approach to find a saddle point efficiently based on the Davidson method
Classification (MSC):
- 62G08 Nonparametric regression and quantile regression
- 62K05 Optimal statistical designs
- 62L05 Sequential statistical design
- 65D15 Algorithms for approximation of functions
Cites Work
- Elements of Information Theory
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Active Learning
- The gentlest ascent dynamics
- Advanced Lectures on Machine Learning
- An Iterative Minimization Formulation for Saddle Point Search
- Iterative minimization algorithm for efficient calculations of transition states
- Multiscale gentlest ascent dynamics for saddle point in effective dynamics of slow-fast system
- Global optimization-based dimer method for finding saddle points
- Optimization-based shrinking dimer method for finding transition states
- Gaussian process surrogates for failure detection: a Bayesian experimental design approach
- Simplified gentlest ascent dynamics for saddle points in non-gradient systems
- Explicit Estimation of Derivatives from Data and Differential Equations by Gaussian Process Regression
Cited In (3)