REFERENCES
Baur, A., Koch, D., Gatternig, B., and Delgado, A. (2022). Noninvasive monitoring system for Tenebrio molitor larvae based on image processing with a watershed algorithm and a neural net approach. Journal of Insects as Food and Feed, pages 1–8.
Cai, Q., Pan, Y., Ngo, C.-W., Tian, X., Duan, L., and Yao, T. (2019). Exploring object relation in mean teacher for cross-domain detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11457–11466.
Csurka, G., Baradel, F., Chidlovskii, B., and Clinchant, S. (2017). Discrepancy-based networks for unsupervised domain adaptation: a comparative study. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 2630–2636.
Dobermann, D., Swift, J., and Field, L. (2017). Opportunities and hurdles of edible insects for food and feed. Nutrition Bulletin, 42(4):293–308.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. (2017). Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30.
Khodabandeh, M., Vahdat, A., Ranjbar, M., and Macready, W. G. (2019). A robust learning approach to domain adaptive object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 480–490.
Madadi, Y., Seydi, V., Nasrollahi, K., Hosseini, R., and Moeslund, T. B. (2020). Deep visual unsupervised domain adaptation for classification tasks: a survey. IET Image Processing, 14(14):3283–3299.
Majewski, P., Zapotoczny, P., Lampa, P., Burduk, R., and Reiner, J. (2022). Multipurpose monitoring system for edible insect breeding based on machine learning. Scientific Reports, 12(1):1–15.
Murez, Z., Kolouri, S., Kriegman, D., Ramamoorthi, R., and Kim, K. (2018). Image to image translation for domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4500–4509.
Oza, P., Sindagi, V. A., VS, V., and Patel, V. M. (2021). Unsupervised domain adaptation of object detectors: a survey. arXiv preprint arXiv:2105.13502.
Padilla, R., Netto, S. L., and Da Silva, E. A. (2020). A survey on performance metrics for object-detection algorithms. In 2020 International Conference on Systems, Signals and Image Processing (IWSSIP), pages 237–242. IEEE.
Saito, K., Watanabe, K., Ushiku, Y., and Harada, T. (2018). Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723–3732.
Shin, I., Woo, S., Pan, F., and Kweon, I. S. (2020). Two-phase pseudo label densification for self-training based domain adaptation. In European Conference on Computer Vision, pages 532–548. Springer.
Sumriddetchkajorn, S., Kamtongdee, C., and Chanhorm, S. (2015). Fault-tolerant optical-penetration-based silkworm gender identification. Computers and Electronics in Agriculture, 119:201–208.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9.
Thrastardottir, R., Olafsdottir, H. T., and Thorarinsdottir, R. I. (2021). Yellow mealworm and black soldier fly larvae for feed and food production in Europe, with emphasis on Iceland. Foods, 10(11):2744.
Toda, Y., Okura, F., Ito, J., Okada, S., Kinoshita, T., Tsuji, H., and Saisho, D. (2020). Training instance segmentation neural network with synthetic datasets for crop seed phenotyping. Communications Biology, 3(1):1–12.
Toldo, M., Maracani, A., Michieli, U., and Zanuttigh, P. (2020). Unsupervised domain adaptation in semantic segmentation: a review. Technologies, 8(2):35.
Wu, Y., Kirillov, A., Massa, F., Lo, W.-Y., and Girshick, R. (2019). Detectron2. https://github.com/facebookresearch/detectron2.
Yoo, D., Kim, N., Park, S., Paek, A. S., and Kweon, I. S. (2016). Pixel-level domain transfer. In European Conference on Computer Vision, pages 517–532. Springer.