Figure 18: Example of person detections (ground-truth in green; eYOLO's output in red) and actual IoU percentages: (a) IoU = 30%; (b) IoU = 37%; (c) IoU = 47%; (d) IoU = 60%.
safety system.
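The IoU percentages reported in Figure 18 follow the standard intersection-over-union definition for axis-aligned boxes. A minimal sketch (the function name and the `(x1, y1, x2, y2)` box format are illustrative assumptions, not the paper's code):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle corners
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping by half: intersection 50, union 150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```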
There are several ways in which the work of this
paper can be further improved to achieve better de-
tection and runtime performance on low-cost em-
bedded devices. In particular, it would be worth ex-
ploring a detection solution based on centroids rather
than bounding boxes. As already reported in the pa-
per, reducing the number of parameters is an effec-
tive way to increase the runtime performance of the
network. A CNN that detects only the center of a per-
son could achieve this while still providing enough
information for safety purposes. Future work should
also include a comparison with other methods (e.g.,
SSD and Tiny-YOLO) and adapt the most recent
networks, such as YOLOv7, to embedded systems,
possibly pushing their application even further onto
low-cost/low-energy microcontroller-based devices.
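A centroid-based detector would typically output a heatmap whose peaks mark person centers, replacing the box-regression head entirely. The sketch below illustrates the post-processing side of such a design; it is a hypothetical example under assumed conventions (NumPy heatmap, 3x3 local-maximum test), not the paper's method:

```python
import numpy as np

def extract_centroids(heatmap, threshold=0.5):
    """Return (x, y, score) for every local maximum above `threshold`.

    Hypothetical peak extraction for a centroid-detection head:
    each peak stands in for one detected person center.
    """
    H, W = heatmap.shape
    centroids = []
    for y in range(H):
        for x in range(W):
            v = heatmap[y, x]
            if v < threshold:
                continue
            # 3x3 neighbourhood (clipped at the borders): keep only local maxima
            y0, y1 = max(0, y - 1), min(H, y + 2)
            x0, x1 = max(0, x - 1), min(W, x + 2)
            if v >= heatmap[y0:y1, x0:x1].max():
                centroids.append((x, y, float(v)))
    return centroids

# Toy heatmap with a single confident peak at (x=4, y=3)
hm = np.zeros((8, 8))
hm[3, 4] = 0.9
print(extract_centroids(hm))  # [(4, 3, 0.9)]
```

Because the head predicts one score map instead of per-anchor box coordinates and confidences, it needs fewer output channels and thus fewer parameters, which is the runtime argument made above.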
ACKNOWLEDGEMENTS
This work has received funding from the EU H2020
research and innovation programme under grant
agreement No. 101017274 (DARKO).
REFERENCES
Cohen, N., Gattuso, J., and MacLennan-Brown, K. (2009).
CCTV Operational Requirements Manual. UK Home
Office, Scientific Development Branch.
Dalal, N. and Triggs, B. (2005). Histograms of oriented gra-
dients for human detection. In 2005 IEEE Computer
Society Conference on Computer Vision and Pattern
Recognition (CVPR’05), volume 1, pages 886–893.
Farouk Khalifa, A., Badr, E., and Elmahdy, H. N. (2019).
A survey on human detection surveillance systems for
raspberry pi. Image and Vision Computing, 85:1–13.
Girshick, R. (2015). Fast r-cnn. In 2015 IEEE International
Conference on Computer Vision (ICCV), pages 1440–
1448.
Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014).
Rich feature hierarchies for accurate object detection
and semantic segmentation. In 2014 IEEE Conference
on Computer Vision and Pattern Recognition, pages
580–587.
Huang, R., Pedoeem, J., and Chen, C. (2018). Yolo-lite:
A real-time object detection algorithm optimized for
non-gpu computers. In IEEE Int. Conf. on Big Data
(Big Data), pages 2503–2510.
Kim, C. E., Oghaz, M. M. D., Fajtl, J., Argyriou, V., and
Remagnino, P. (2019). A comparison of embedded
deep learning methods for person detection. In Proc.
of the 14th Int. Conf. on Computer Vision Theory and
Applications (VISAPP).
Li, C., Li, L., Jiang, H., Weng, K., Geng, Y., Li, L., Ke,
Z., Li, Q., Cheng, M., Nie, W., Li, Y., Zhang, B.,
Liang, Y., Zhou, L., Xu, X., Chu, X., Wei, X., and
Wei, X. (2022). Yolov6: A single-stage object detec-
tion framework for industrial applications.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ra-
manan, D., Dollár, P., and Zitnick, C. L. (2014). Mi-
crosoft COCO: Common objects in context. In Fleet,
D., Pajdla, T., Schiele, B., and Tuytelaars, T., edi-
tors, Computer Vision – ECCV 2014, pages 740–755.
Springer International Publishing.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S.,
Fu, C.-Y., and Berg, A. C. (2016). SSD: Single shot
MultiBox detector. In Computer Vision – ECCV 2016,
pages 21–37. Springer International Publishing.
Padilla, R., Netto, S. L., and da Silva, E. A. B. (2020). A
survey on performance metrics for object-detection al-
gorithms. In 2020 International Conference on Sys-
tems, Signals and Image Processing (IWSSIP).
Redmon, J., Divvala, S., Girshick, R., and Farhadi, A.
(2016). You only look once: Unified, real-time object
detection. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR).
Redmon, J. and Farhadi, A. (2017). Yolo9000: better, faster,
stronger. In Proceedings of the IEEE conference on
computer vision and pattern recognition, pages 7263–
7271.
Redmon, J. and Farhadi, A. (2018). Yolov3: An incremental
improvement. In CoRR, volume abs/1804.02767.
Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster
r-cnn: Towards real-time object detection with re-
VISAPP 2023 - 18th International Conference on Computer Vision Theory and Applications