Abstraction refinement guided by a learnt probabilistic model


DOI: 10.1145/2837614.2837663
zbMATH Open: 1347.68084
arXiv: 1511.01874
OpenAlex: W2264244749
MaRDI QID: Q2828290
FDO: Q2828290


Authors: Radu Grigore, Hongseok Yang


Publication date: 24 October 2016

Published in: Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages

Abstract: The core challenge in designing an effective static program analysis is to find a good program abstraction -- one that retains only details relevant to a given query. In this paper, we present a new approach for automatically finding such an abstraction. Our approach uses a pessimistic strategy, which can optionally use guidance from a probabilistic model. Our approach applies to parametric static analyses implemented in Datalog, and is based on counterexample-guided abstraction refinement. For each untried abstraction, our probabilistic model provides a probability of success, while the size of the abstraction provides an estimate of its cost in terms of analysis time. Combining these two metrics, probability and cost, our refinement algorithm picks an optimal abstraction. Our probabilistic model is a variant of the Erdős–Rényi random graph model, and it is tunable by what we call hyperparameters. We present a method to learn good values for these hyperparameters, by observing past runs of the analysis on an existing codebase. We evaluate our approach on an object-sensitive pointer analysis for Java programs, with two client analyses (PolySite and Downcast).
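
To make the probability/cost trade-off described in the abstract concrete, the following Python sketch shows one way a refinement step could select among untried abstractions. It is an illustrative assumption, not the paper's algorithm: the names Abstraction, success_probability, estimated_cost, and pick_next, the independent-parameter scoring rule, and the linear cost proxy are all hypothetical stand-ins for the learnt model and cost estimate the paper describes.

from dataclasses import dataclass

@dataclass(frozen=True)
class Abstraction:
    # A parametric abstraction: the set of parameters switched on
    # (e.g., allocation sites given extra context sensitivity).
    enabled: frozenset

def success_probability(abstraction, hyperparams):
    # Toy stand-in for the learnt probabilistic model: each enabled parameter
    # independently "helps" with a learnt probability, loosely in the spirit of
    # an Erdős–Rényi-style model tuned by hyperparameters.
    p_help = hyperparams["p_help"]
    return 1.0 - (1.0 - p_help) ** len(abstraction.enabled)

def estimated_cost(abstraction):
    # Cost grows with abstraction size, a proxy for analysis time.
    return 1.0 + len(abstraction.enabled)

def pick_next(candidates, hyperparams):
    # Pick the candidate with the best probability-to-cost trade-off.
    return max(candidates,
               key=lambda a: success_probability(a, hyperparams) / estimated_cost(a))

# Example: three untried abstractions of increasing size.
candidates = [Abstraction(frozenset({"h1"})),
              Abstraction(frozenset({"h1", "h2"})),
              Abstraction(frozenset({"h1", "h2", "h3"}))]
best = pick_next(candidates, {"p_help": 0.3})
print(sorted(best.enabled))

In the paper itself, the probability of success comes from the learnt Erdős–Rényi-style model and the hyperparameters are fitted on past analysis runs; the sketch above only mimics the shape of that selection step.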


Full work available at URL: https://arxiv.org/abs/1511.01874




Cited In (12)





