PDFA Distillation with Error Bound Guarantees
Abstract
Active learning algorithms for inferring probabilistic finite automata (PFA) have gained interest recently because they can provide surrogate models for some types of neural networks. However, recent approaches either cannot guarantee determinism, which makes the resulting automaton harder to understand and to compute with, or they rely on techniques that bound errors only on individual transitions. In this work we propose a derivative of the recent L# algorithm that learns deterministic PFA (PDFA) from systems that return a distribution over a set of tokens given an input string. Besides guaranteeing determinism, our approach gives error bounds on the probabilities assigned to whole strings, using an easy-to-understand argument. We show the formal correctness of our algorithm and test it on neural networks trained on three datasets from computer and network systems. We show that the algorithm closely learns the networks' behaviour, and we provide an example application in which the model is used to interpret the network. We note that our approach is, in principle, applicable to learning deterministic weighted finite automata in general. We provide the source code of our algorithm and the relevant scripts in our public repository.
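To make the setting concrete, below is a minimal sketch of a PDFA and of how the probability it assigns to a whole string is the product of the per-transition token probabilities along the unique path through the automaton; this whole-string probability is what the error bounds above refer to. All names here (`Pdfa`, `EOS`, the toy transitions) are hypothetical illustrations, not taken from the paper or its repository.

```python
from dataclasses import dataclass, field

EOS = "<eos>"  # hypothetical end-of-string token

@dataclass
class Pdfa:
    """Minimal deterministic PFA: in each state, every token maps to
    exactly one (probability, next_state) pair. Illustrative only."""
    start: int
    # trans[state][token] = (probability, next_state)
    trans: dict[int, dict[str, tuple[float, int]]] = field(default_factory=dict)

    def string_prob(self, tokens: list[str]) -> float:
        """Probability of a whole string: the product of the transition
        probabilities along the unique path, times the probability of
        emitting EOS in the final state reached."""
        state, prob = self.start, 1.0
        for tok in tokens:
            p, state = self.trans[state][tok]
            prob *= p
        p_eos, _ = self.trans[state][EOS]
        return prob * p_eos

# Toy automaton over the token set {"a", "b"}.
toy = Pdfa(
    start=0,
    trans={
        0: {"a": (0.6, 1), "b": (0.3, 0), EOS: (0.1, 0)},
        1: {"a": (0.2, 1), "b": (0.5, 0), EOS: (0.3, 1)},
    },
)
print(toy.string_prob(["a", "b"]))  # 0.6 * 0.5 * 0.1 = 0.03
```

Determinism means each (state, token) pair has exactly one successor, so evaluating a string is a single linear pass; this is what makes the distilled automaton easy to compute with and to inspect.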
Files
File under embargo until 03-03-2025