
Table 8: Counting results with different frame rates.

Frame Rate   D_T = 150   D_T = 250   D_T = 300
60 fps       1,213       1,153       1,158
30 fps       1,618       1,190       1,178
15 fps       1,270       721         834
12 fps       995         650         487
an overestimation of the count. At lower frame rates,
fewer fish are captured in the images, as the frame rate
becomes too low to record their presence effectively.
A similar trend was observed for D_T = 250 and D_T = 300. At 60 fps, all three threshold values provided satisfactory results. However, at 30 fps, it was found that D_T must be set to 250 or 300 to achieve reliable results. The conditions for D_T = 150 at 60 fps and D_T = 300 at 30 fps can be considered nearly equivalent, and indeed, the fish counting results were almost identical under these settings.
As stated in Section 1, real-time processing is required in aquaculture settings, making lower frame rates more desirable. In such cases, it is necessary to consider the movement speed of the fish and set an appropriate D_T value.
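The near-equivalence of D_T = 150 at 60 fps and D_T = 300 at 30 fps suggests that the threshold scales with the inter-frame interval. A minimal sketch of this rule of thumb (the function name is our own illustrative assumption, not part of the proposed method):

```python
def scale_threshold(d_t_ref: float, fps_ref: float, fps_target: float) -> float:
    """Rescale a distance threshold D_T calibrated at fps_ref to fps_target.

    D_T bounds how far a fish can travel between consecutive frames, so it
    grows in proportion to the inter-frame interval, i.e. inversely with fps.
    """
    return d_t_ref * fps_ref / fps_target

# A threshold of 150 px tuned at 60 fps corresponds to 300 px at 30 fps.
print(scale_threshold(150, 60, 30))
```

This is consistent with Table 8, where halving the frame rate roughly doubles the threshold needed for reliable counts.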
5 SUMMARY
In this paper, we proposed a method for counting fast-swimming fish for application in aquaculture settings. Because real-time counting is required, we employed simple techniques, yet the method achieved sufficient accuracy. Future challenges include conducting detailed evaluations in different environments and with various fish species, and developing a real-time system.
ACKNOWLEDGEMENTS
We would like to thank the staff of the Oshima Hatchery at the Kindai University Aquaculture Technology and Production Center for their helpful support. This work was supported by MEXT KAKENHI Grant Numbers JP21H05302 and JP23K11158.
VISAPP 2025 - 20th International Conference on Computer Vision Theory and Applications