Amir Taghizadeh Vahed, Christian W. Omlin
This paper addresses the extraction of knowledge from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods for extracting knowledge from such networks have relied on the hypothesis that network states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics that either limit the number of clusters that may form during training or limit the exploration of the output space of the hidden recurrent state neurons. These limitations, while necessary, may reduce fidelity; that is, the extracted knowledge may not model the true behavior of the trained network, perhaps not even on the training set. The proposed method instead uses a polynomial-time symbolic learning algorithm to infer DFAs solely from observations of a trained network's input/output behavior. This method therefore has the potential to increase the fidelity of the extracted knowledge.
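The abstract's core idea is inferring a DFA purely from a black box's input/output behavior, without inspecting hidden states. A minimal sketch of one such observation-based inference scheme is below; the paper's actual symbolic learning algorithm is not specified here, so this uses a generic signature-based state-discovery approach, and `oracle` is a hypothetical stand-in for querying the trained network (here it accepts binary strings with an even number of 1s):

```python
from itertools import product

def oracle(s):
    # Hypothetical stand-in for the trained recurrent network, queried
    # as a black box: accepts strings over {0,1} with an even count of 1s.
    return s.count('1') % 2 == 0

ALPHABET = ['0', '1']
# Distinguishing suffixes (all strings up to length 2); in general this
# set would be grown until the hypothesis is consistent with the oracle.
SUFFIXES = [''.join(p) for k in range(3) for p in product(ALPHABET, repeat=k)]

def signature(prefix):
    # Behavioral fingerprint of the state reached after reading `prefix`.
    return tuple(oracle(prefix + e) for e in SUFFIXES)

def infer_dfa():
    # Breadth-first exploration: two prefixes reaching states with the
    # same fingerprint are assumed to reach the same DFA state.
    start = ''
    states = {signature(start): start}
    frontier = [start]
    trans = {}
    while frontier:
        p = frontier.pop(0)
        for a in ALPHABET:
            sig = signature(p + a)
            if sig not in states:
                states[sig] = p + a
                frontier.append(p + a)
            trans[(signature(p), a)] = sig
    accept = {sig for sig, rep in states.items() if oracle(rep)}
    return signature(start), trans, accept

def run(dfa, s):
    # Simulate the extracted DFA on an input string.
    state, trans, accept = dfa
    for ch in s:
        state = trans[(state, ch)]
    return state in accept
```

For the parity oracle this recovers the expected two-state machine; fidelity here means `run(dfa, s)` agrees with `oracle(s)` on all inputs, which is exactly the property cluster-based extraction can fail to guarantee.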