Authors:
Jarren Briscoe 1,2; Brian Rague 2; Kyle Feuz 2 and Robert Ball 2
Affiliations:
1 Department of Computer Science, Washington State University, Pullman, Washington, U.S.A.
2 Department of Computer Science, Weber State University, Ogden, Utah, U.S.A.
Keyword(s):
Neural Networks, Network Pruning, Boolean Abstraction, Explainable AI, XAI, Interpretability.
Abstract:
The inherently intricate topology of a neural network (NN) limits our understanding of its function and purpose. Neural network abstraction and analysis techniques are designed to increase the comprehensibility of these computing structures. To achieve a more concise and interpretable representation of a NN as a Boolean graph (BG), we introduce the Neural Constantness Heuristic (NCH), Neural Constant Propagation (NCP), shared logic, the Neural Real-Valued Constantness Heuristic (NRVCH), and negligible neural nodes. These techniques reduce a neural layer's input space and the number of nodes for a problem in NP, thereby reducing its complexity. Additionally, we contrast two parsing methods that translate NNs to BGs: reverse traversal (N) and forward traversal (F). For most use cases, the combination of NRVCH, NCP, and N is the best choice.