example, event data cannot be fed in directly. Furthermore, other pruning methods are available to reduce the energy consumption of ANNs, so there is no need to adopt the constraints of an SNN architecture to reach this goal. In addition, the original floating-point activations must be approximated by firing rates during the conversion. As a consequence, a very high latency is required to achieve suitable accuracy, which in turn drastically increases both energy consumption and inference time. This makes converted SNNs almost unusable for complex real-world scenarios, and direct training of SNNs should be preferred.
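The latency argument can be made concrete with a minimal sketch (our own illustration, not code from the paper): assuming a simple stochastic (Bernoulli) rate code, an activation a in [0, 1] is decoded as the fraction of timesteps that carry a spike, so the estimation error shrinks only on the order of 1/sqrt(T). The activation value, trial count, and helper name below are arbitrary choices for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code_error(a: float, T: int, trials: int = 1000) -> float:
    """Mean absolute error when activation `a` is decoded from the
    empirical firing rate of a Bernoulli spike train of length `T`."""
    spikes = rng.random((trials, T)) < a   # spike with probability a
    rates = spikes.mean(axis=1)            # decoded activations
    return float(np.abs(rates - a).mean())

a = 0.63                                   # arbitrary activation value
for T in (10, 100, 1000):
    print(f"T={T:5d}  mean |rate - a| = {rate_code_error(a, T):.4f}")
```

A deterministic rate code fares somewhat better (resolution exactly 1/T), but either way each additional digit of precision multiplies the number of timesteps, and with it the spike count and hence the energy budget.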
ACKNOWLEDGEMENTS
This work was funded by the Carl Zeiss Stiftung, Germany, under the Sustainable Embedded AI project (P2021-02-009).