Online machine learning techniques for Coq: a comparison

From MaRDI portal
Publication: Q2128797

DOI: 10.1007/978-3-030-81097-9_5
zbMATH Open: 1485.68288
arXiv: 2104.05207
OpenAlex: W3186870373
MaRDI QID: Q2128797


Authors: Liao Zhang, Lasse Blaauwbroek, Bartosz Piotrowski, Prokop Černý, Cezary Kaliszyk, Josef Urban


Publication date: 22 April 2022

Abstract: We present a comparison of several online machine learning techniques for tactical learning and proving in the Coq proof assistant. This work builds on top of Tactician, a plugin for Coq that learns from proofs written by the user to synthesize new proofs. Learning happens in an online manner, meaning that Tactician's machine learning model is updated immediately every time the user performs a step in an interactive proof. This has important advantages compared to the more studied offline learning systems: (1) it provides the user with a seamless, interactive experience with Tactician, and (2) it takes advantage of locality of proof similarity, which means that proofs similar to the current proof are likely to be found close by. We implement two online methods, namely approximate k-nearest neighbors based on locality-sensitive hashing forests and random decision forests. Additionally, we conduct experiments with gradient boosted trees in an offline setting using XGBoost. We compare the relative performance of Tactician using these three learning methods on Coq's standard library.
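The online setting described in the abstract can be illustrated with a minimal sketch: a k-nearest-neighbors tactic predictor whose example store is updated after every proof step. This is purely illustrative and not Tactician's implementation; the class name, feature sets, and tactic strings below are made up, and the real system uses locality-sensitive hashing forests rather than the exact Jaccard search shown here.

```python
from collections import Counter

class OnlineKNN:
    """Toy online k-NN tactic predictor (illustrative only).

    Each proof state is abstracted as a set of features. The model is
    updated immediately after every proof step, mirroring the online
    setting from the abstract; Tactician itself uses LSH forests to
    make the nearest-neighbor search fast.
    """

    def __init__(self, k=3):
        self.k = k
        self.examples = []  # list of (frozenset of features, tactic)

    def learn(self, features, tactic):
        # Online update: one example appended per interactive proof step.
        self.examples.append((frozenset(features), tactic))

    def predict(self, features):
        # Rank stored examples by Jaccard similarity to the query state,
        # then let the k nearest examples vote on a tactic ranking.
        q = frozenset(features)
        scored = sorted(
            self.examples,
            key=lambda ex: len(q & ex[0]) / len(q | ex[0]) if q | ex[0] else 0.0,
            reverse=True,
        )
        votes = Counter(tactic for _, tactic in scored[: self.k])
        return [t for t, _ in votes.most_common()]

model = OnlineKNN(k=2)
model.learn({"Nat", "add", "0"}, "reflexivity")
model.learn({"Nat", "add", "S"}, "simpl")
model.learn({"List", "app", "nil"}, "reflexivity")
print(model.predict({"Nat", "add", "0", "n"}))  # → ['reflexivity', 'simpl']
```

Because every `learn` call takes effect before the next `predict`, proofs completed moments earlier immediately influence suggestions, which is the locality-of-proof-similarity advantage the abstract highlights.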


Full work available at URL: https://arxiv.org/abs/2104.05207





Cited In (3)

