
6 CONCLUSION
In this work, we presented a method to automatically generate realistic rendered images using the 3D computer graphics software Blender. In addition, we developed algorithms to procedurally generate two common defect types in injection molding: short shots and color streaks. Our approach also automates the generation of semantic segmentation labels for parts and defects, eliminating the labor-intensive process of manual labeling. This significantly reduces the effort required to train AI models with synthetic data and may serve as a foundation for further exploring the use of synthetic data for semantic segmentation in quality inspection tasks.
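For illustration, a label-export step of this kind can be scripted through Blender's Python API (bpy) using object index passes. The following is a minimal sketch under stated assumptions, not the exact pipeline used in this work; the object names, pass indices, and output paths are illustrative.

import bpy

# Minimal sketch (assumptions: Cycles renderer, scene objects
# named "Part" and "Defect" already exist).
scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Enable the object index pass so objects can be separated
# into per-class segmentation masks.
view_layer = scene.view_layers[0]
view_layer.use_pass_object_index = True

# Assign distinct pass indices to part and defect geometry.
bpy.data.objects["Part"].pass_index = 1
bpy.data.objects["Defect"].pass_index = 2

# Compositor graph: Render Layers -> ID Mask -> File Output.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
render_layers = tree.nodes.new("CompositorNodeRLayers")
id_mask = tree.nodes.new("CompositorNodeIDMask")
id_mask.index = 2  # extract the defect mask
file_out = tree.nodes.new("CompositorNodeOutputFile")
file_out.base_path = "/tmp/masks"  # illustrative output path
tree.links.new(render_layers.outputs["IndexOB"], id_mask.inputs["ID value"])
tree.links.new(id_mask.outputs["Alpha"], file_out.inputs["Image"])

# Render one frame; the RGB image and the mask are written together.
scene.render.filepath = "/tmp/render.png"
bpy.ops.render.render(write_still=True)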
Our analysis showed that synthetic image data enhances the segmentation of inspection objects, even with sparse real training data. However, improvements in defect segmentation were modest, and simply increasing the size of synthetic datasets did not yield consistent benefits. Severe class imbalance, with defect pixels being both fewer in number and less frequently represented, likely contributed to this limitation. Future work should address these issues by refining loss functions and carefully balancing synthetic and real data, particularly for underrepresented defect classes.
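As a concrete starting point for such loss refinement, the focal loss (Ross and Dollár, 2017) down-weights well-classified background pixels and admits per-class weights. The following PyTorch sketch illustrates the idea for multi-class segmentation; the class count and the gamma and alpha values are illustrative assumptions rather than tuned settings.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Focal loss for semantic segmentation (sketch).

    logits:  (N, C, H, W) raw network outputs
    targets: (N, H, W) integer class labels
    gamma:   focusing parameter; larger values down-weight
             easy (mostly background) pixels
    alpha:   optional (C,) per-class weights for rare defect classes
    """
    log_probs = F.log_softmax(logits, dim=1)              # (N, C, H, W)
    ce = F.nll_loss(log_probs, targets, weight=alpha,
                    reduction="none")                      # (N, H, W)
    # Probability assigned to the true class of each pixel.
    pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    return ((1.0 - pt) ** gamma * ce).mean()

# Illustrative usage: 3 classes (background, part, defect),
# with the defect class weighted more heavily.
logits = torch.randn(2, 3, 64, 64, requires_grad=True)
targets = torch.randint(0, 3, (2, 64, 64))
loss = focal_loss(logits, targets, gamma=2.0,
                  alpha=torch.tensor([0.25, 0.25, 0.5]))
loss.backward()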
One of the most significant findings is the potential of synthetic data for domain expansion, in particular for enabling the inspection of parts in new colors or other variations not present in the real training data.
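Since part color is determined by the material graph, such color-domain expansion can be scripted directly in Blender. The snippet below is a minimal sketch, assuming a material named "PartMaterial" with a default Principled BSDF node; the material name and color palette are illustrative assumptions.

import bpy

# Sketch of color-domain expansion: render the same part in colors
# absent from the real training data. Material name and palette
# are illustrative assumptions.
palette = [(0.8, 0.1, 0.1, 1.0),   # red
           (0.1, 0.4, 0.8, 1.0),   # blue
           (0.9, 0.8, 0.1, 1.0)]   # yellow

mat = bpy.data.materials["PartMaterial"]  # assumes use_nodes is enabled
bsdf = mat.node_tree.nodes["Principled BSDF"]

for i, color in enumerate(palette):
    bsdf.inputs["Base Color"].default_value = color
    bpy.context.scene.render.filepath = f"/tmp/part_color_{i}.png"
    bpy.ops.render.render(write_still=True)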
ACKNOWLEDGEMENTS
This research was funded by the Bundesministerium für Bildung und Forschung (BMBF), grant number 01IS22019A-E.
REFERENCES
Blender (2018). Blender - a 3D modelling and render-
ing package. Blender Foundation, Stichting Blender
Foundation, Amsterdam.
Boikov, A., Payor, V., Savelev, R., and Kolesnikov, A.
(2021). Synthetic data generation for steel defect de-
tection and classification using deep learning. Symme-
try, 13(7):1176.
Bosnar, L., Hagen, H., and Gospodnetic, P. (2023). Pro-
cedural defect modeling for virtual surface inspection
environments. IEEE Computer Graphics and Appli-
cations, 43(2):13–22.
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and
Adam, H. (2018). Encoder-decoder with atrous sepa-
rable convolution for semantic image segmentation. In
Proceedings of the European conference on computer
vision (ECCV), pages 801–818.
Delgado, G., Cortés, A., García, S., Loyo, E., Berasategi, M., and Aranjuelo, N. (2023). Methodology for generating synthetic labeled datasets for visual container inspection. Transportation Research Part E: Logistics and Transportation Review, 175:103174.
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative adversarial networks.
Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B.,
Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V.,
et al. (2019). Searching for mobilenetv3. In Pro-
ceedings of the IEEE/CVF international conference
on computer vision, pages 1314–1324.
Paulin, G. and Ivasic-Kos, M. (2023). Review and analy-
sis of synthetic dataset generation methods and tech-
niques for application in computer vision. Artificial
intelligence review, 56(9):9221–9265.
Ross, T.-Y. and Dollár, G. (2017). Focal loss for dense object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2980–2988.
Schraml, D., Trambitckii, K., and Notni, G. (2024). PixLabelCV - labeling images for semantic segmentation fast, pixel-precise and offline. In Proceedings of the 32nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2024), volume 3401 of Computer Science Research Notes (CSRN), pages 47–55, Plzeň, Czech Republic.
Tan, M. and Le, Q. V. (2020). Efficientnet: Rethinking
model scaling for convolutional neural networks.
Tsirikoglou, A., Eilertsen, G., and Unger, J. (2020). A sur-
vey of image synthesis methods for visual machine
learning. In Computer graphics forum, volume 39,
pages 426–451. Wiley Online Library.