stant Heuristic (NRVCH), the Neural Constant Propagation (NCP), the forward traversal (F), and the reverse traversal (N).
NCH is functionally equivalent to B (a generic NN-to-BG algorithm). NRVCH produces results that differ from B's and incurs at most as much sum-squared error as NCH. Furthermore, NRVCH translates at least as many nodes to constant values as NCH. Both heuristics allow some nodes to be calculated in linear time.
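The heuristics themselves are defined earlier in the paper; the following is only a minimal sketch of the core idea behind constant-node detection, assuming sign activations over inputs in {-1, +1}^n. The function name `constant_output` and the interval-bound test are illustrative assumptions, not the paper's NCH or NRVCH definitions:

```python
import numpy as np

def constant_output(weights, bias):
    """Check whether a sign-activated neuron is constant over all
    inputs in {-1, +1}^n.

    The pre-activation w.x + b ranges over
    [b - sum(|w|), b + sum(|w|)]; if that interval excludes zero,
    the sign of the output can never change.
    """
    reach = np.abs(weights).sum()
    if bias - reach > 0:
        return +1   # output is always positive
    if bias + reach < 0:
        return -1   # output is always negative
    return None     # output depends on the input

# Example: the bias dominates the weights, so the node is constant.
print(constant_output(np.array([0.5, -0.3, 0.1]), bias=2.0))  # prints 1
```

Because the test only sums absolute weight magnitudes, a node flagged this way can be replaced by a constant in time linear in its fan-in, consistent with the linear-time claim above.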
NCP uses constant nodes from previous layers to reduce the weight space in the current layer. The propagation technique is better implemented with NRVCH but can also be applied with NCH.
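As a sketch of the propagation step: once a node in a previous layer is known to be constant, its contribution to every successor is fixed, so the corresponding weight column can be folded into the bias and dropped from the current layer's weight space. The helper `propagate_constants` and its dictionary interface are assumptions for illustration, not the paper's NCP definition:

```python
import numpy as np

def propagate_constants(W, b, const_vals):
    """Fold known-constant inputs into the next layer's bias.

    W          : (n_out, n_in) weight matrix of the next layer
    b          : (n_out,) bias vector
    const_vals : dict {input index: constant value in {-1, +1}}

    Returns a reduced weight matrix (constant columns removed) and
    an adjusted bias, shrinking the layer's effective weight space.
    """
    keep_idx = [j for j in range(W.shape[1]) if j not in const_vals]

    # Each constant input contributes a fixed term W[:, j] * value.
    b_new = b + sum(W[:, j] * v for j, v in const_vals.items())
    return W[:, keep_idx], b_new

# Example: input 1 is known to be constantly -1.
W = np.array([[1.0, 2.0, -1.0],
              [0.5, -1.0, 3.0]])
b = np.array([0.0, 1.0])
W_red, b_new = propagate_constants(W, b, {1: -1})
print(W_red.shape, b_new)  # (2, 2) [-2.  2.]
```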
F uses its perfect knowledge of the previous layer to reduce the current layer's input space via shared logic, but it does not complement most B algorithms. In contrast, N suits many B options and can omit neural nodes, or even entire layers, from translation.
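A minimal sketch of a reverse traversal in this spirit, assuming the network is given as per-layer predecessor sets; the representation and the function `reverse_traversal` are illustrative assumptions rather than the paper's definition of N:

```python
def reverse_traversal(layers, output_nodes):
    """Walk the network from the outputs back to the inputs,
    collecting only the nodes that can influence an output.

    layers       : list of dicts {node: set of predecessor nodes},
                   one per layer, with the output layer last
    output_nodes : nodes of interest in the final layer
    """
    needed = set(output_nodes)
    plan = []
    for layer in reversed(layers):
        # Keep only nodes some later layer actually depends on.
        live = {n: preds for n, preds in layer.items() if n in needed}
        plan.append(live)
        needed = set().union(*live.values()) if live else set()
    return list(reversed(plan))

# Example: node 'h2' feeds no output, so it is never visited.
layers = [{'h1': {'x0', 'x1'}, 'h2': {'x1'}},
          {'y': {'h1'}}]
print(reverse_traversal(layers, {'y'}))
# -> [{'h1': {'x0', 'x1'}}, {'y': {'h1'}}]
```

Nodes and layers with no path to an output never enter the plan, which is how the reverse traversal omits them from the Boolean translation entirely.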
All things considered, the combination of NRVCH, NCP, and N is often the best choice with respect to computational complexity, conciseness, and accuracy.
8.2 Future Work
An immediate next step is to prove that NRVCH is viable for a larger set of activation functions than those described here. Moreover, the average complexity improvement of these heuristics and traversals should be investigated for a typical neural network (NN). A single approximate complexity for the general case is likely too broad; instead, subsets of networks with distinct hyperparameters should be considered and addressed separately. Relatedly, future research can investigate which neural networks and data sets are most susceptible to constant neural nodes. Other potential work includes finding ways to leverage the shared logic found in F in combination with N.
In broader disciplines, traditional neural network pruning could be incorporated into the approaches presented here. Alternatively, this work could support transfer learning: extract Boolean logic from two binary neural networks, combine the logic, and map the combined logic onto a new network.
ACKNOWLEDGEMENTS
This work was partially funded by AFRL Research Grant FA8650-20-F-1956.