
ACKNOWLEDGEMENTS
This work was supported by the Carl-Zeiss-Foundation as part of the project BeeVision.