
Using synthetic data to train object detection systems with YOLOv9t is a valid approach for overcoming the challenges of real-world data acquisition. The obtained results show that synthetic data is a feasible and effective tool, particularly for search and rescue operations that use transfer learning methods. The performance improvement when the model is exposed to even 10% of real data is notable. Special attention should be given to the observation that transfer learning with 70% of real data outperformed models trained on 100% real data. Using small amounts of real data in this way opens up the possibility of training models even when real-world data is sparse, since synthetic data can be generated rapidly and in large quantities.
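The fine-tuning stage described above relies on selecting a small, fixed fraction of the real dataset. A minimal sketch of that selection step is shown below; the helper name and parameters are illustrative assumptions, not part of the paper's pipeline:

```python
import random

def real_data_subset(real_images, fraction, seed=0):
    """Select a reproducible fraction of the real dataset for fine-tuning.

    `real_images` is a list of image paths; `fraction` is e.g. 0.1 for the
    10% split discussed above. (Hypothetical helper, not from the paper.)
    """
    rng = random.Random(seed)                 # fixed seed => repeatable split
    k = round(len(real_images) * fraction)    # number of real images to keep
    return rng.sample(real_images, k)

# Example: a 10% real-data split for the fine-tuning stage.
real = [f"real_{i:04d}.jpg" for i in range(500)]
subset = real_data_subset(real, 0.10)
print(len(subset))  # 50
```

The synthetic-pretrained model would then be fine-tuned on `subset` alone, mirroring the 10%/70% experiments reported here.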
The ease of virtual dataset generation can be exploited to produce synthetic datasets far larger than any real counterpart. Increasing the similarity between synthetic and real image datasets could further improve the model, and future studies could instead increase the diversity of synthetic datasets to achieve better generalization to real-world recognition. Further work can expand the datasets to include different weather, lighting, and sea conditions for both real and synthetic data, and extend the evaluation to other domains, such as terrestrial SAR operations. Incorporating additional noise sources, such as dust and humidity affecting camera lenses, can further approximate real-world conditions. According to (Krump and Stütz, 2021), the main difference between real and synthetic data, referred to as the "reality gap," relates to general coloration, the absence of noise, and the lack of fine structures. This opens the possibility for further research to bridge this gap.
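One way to narrow the noise component of the reality gap is to augment synthetic frames with sensor-like distortions. The sketch below adds Gaussian noise and a flat haze layer to emulate humidity on the lens; the function name and parameter values are assumptions for illustration, not the paper's augmentation pipeline:

```python
import numpy as np

def add_sensor_noise(img, noise_std=8.0, haze=0.15, seed=0):
    """Augment an RGB image (H, W, 3, uint8) with additive Gaussian sensor
    noise and a flat haze layer, roughly emulating a humid or dusty lens.
    (Illustrative parameters, not from the paper.)
    """
    rng = np.random.default_rng(seed)
    x = img.astype(np.float32)
    x = (1.0 - haze) * x + haze * 255.0            # wash out contrast (haze)
    x += rng.normal(0.0, noise_std, size=x.shape)  # additive sensor noise
    return np.clip(x, 0, 255).astype(np.uint8)

# Example on a small uniform gray frame
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
noisy = add_sensor_noise(frame)
print(noisy.shape, noisy.dtype)
```

Applying such augmentations during synthetic-data training could let the model see coloration shifts and noise it will encounter in real imagery.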
Implementation and testing in real-world scenarios can also be explored, evaluating the integration of all solutions under hardware constraints and their associated challenges. These constraints include factors such as camera resolution, embedded processing power, and image stabilization systems (gimbals). Hardware limitations could significantly impact performance, and comparing the current YOLOv9t model with different architectures can help optimize factors such as recognition time, training requirements, and effectiveness.
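Comparing recognition time across architectures on embedded hardware reduces to measuring per-frame inference latency. A minimal, model-agnostic timing harness is sketched below; the callable `infer` stands in for any detector's forward pass, and all names are illustrative assumptions:

```python
import time
import statistics

def mean_latency_ms(infer, frames, warmup=3, runs=20):
    """Average per-frame inference latency in milliseconds for a callable
    `infer`. Warm-up calls exclude one-time setup costs (model loading,
    JIT compilation) that would skew embedded-hardware comparisons.
    """
    for f in frames[:warmup]:
        infer(f)  # warm-up passes, not timed
    times = []
    for _ in range(runs):
        for f in frames:
            t0 = time.perf_counter()
            infer(f)
            times.append((time.perf_counter() - t0) * 1e3)
    return statistics.mean(times)

# Example with a dummy "model" to show usage
lat = mean_latency_ms(lambda f: sum(f), frames=[list(range(100))] * 5)
print(f"{lat:.4f} ms/frame")
```

The same harness run on the target embedded board, once per candidate architecture, would give directly comparable recognition-time figures.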
REFERENCES
Bird, J. J., Faria, D. R., Ekárt, A., and Ayrosa, P. P. S. (2020). From simulation to reality: CNN transfer learning for scene classification. In 2020 IEEE 10th International Conference on Intelligent Systems (IS), pages 619–625.
Dabbiru, L., Goodin, C., Carruth, D., and Boone, J. (2023). Object detection in synthetic aerial imagery using deep learning. In Dudzik, M. C., Jameson, S. M., and Axenson, T. J., editors, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, volume 12540, page 1254002.
DotCam, TK-Master, Zoc, and Elble, S. (2022). Environment-Project. https://github.com/UE4-OceanProject/Environment-Project.
Everingham, M., Eslami, S. M. A., Gool, L. V., Williams, C. K. I., Winn, J. M., and Zisserman, A. (2014). The PASCAL visual object classes challenge: A retrospective. International Journal of Computer Vision, 111:98–136.
Games, E. Unreal Engine. https://www.unrealengine.com. Accessed: 2024-02-22.
Géron, A. (2017). Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. O'Reilly Media, Sebastopol, CA.
IAMSAR, I. (2022). International aeronautical and maritime search and rescue manual. Mission coordination, 2.
Jayalath, K. and Munasinghe, R. (2021). Drone-based autonomous human identification for search and rescue missions in real-time. pages 518–523.
Kratzke, T. M., Stone, L. D., and Frost, J. R. (2010). Search and rescue optimal planning system. In 2010 13th International Conference on Information Fusion, pages 1–8.
Krump, M. and Stütz, P. (2020). UAV based vehicle detection with synthetic training: Identification of performance factors using image descriptors and machine learning. In Modelling and Simulation for Autonomous Systems: 7th International Conference, MESAS 2020, Prague, Czech Republic, October 21, 2020, Revised Selected Papers, pages 62–85, Berlin, Heidelberg. Springer-Verlag.
Krump, M. and Stütz, P. (2021). UAV based vehicle detection with synthetic training: Identification of performance factors using image descriptors and machine learning. In Mazal, J., Fagiolini, A., Vasik, P., and Turi, M., editors, Modelling and Simulation for Autonomous Systems, pages 62–85, Cham. Springer International Publishing.
Lima, L., Andrade, F., Djenouri, Y., Pfeiffer, C., and Moura, M. (2023). Empowering search and rescue operations with big data technology: A comprehensive study of YOLOv8 transfer learning for transportation safety. pages 2616–2623.
Lin, T.-Y., Maire, M., Belongie, S., Bourdev, L., Girshick, R., Hays, J., Perona, P., Ramanan, D., Zitnick, C. L., and Dollár, P. (2015). Microsoft COCO: Common objects in context.
Pettersvold, J., Wiulsrod, M., and Hallgreen, S. (2023). Synthetic data generation for search and rescue missions: A novel approach using Unreal Engine, AirSim, and raycast. Bachelor's thesis, University of South-Eastern Norway.
Optimizing Object Detection for Maritime Search and Rescue: Progressive Fine-Tuning of YOLOv9 with Real and Synthetic Data