fected every time the tool parameters change. A system based on the proposed methodology would allow not only the assessment of the reliability of inference results but also the automation of training data selection by indicating the optimal number of samples needed.
The machine learning models deployed in this experiment respond differently to variability in the deep features extracted from the input images: they show high robustness even to significant changes in some features while being highly sensitive to others. For this reason, further work is required to improve the proposed methodology for determining the degree of dissimilarity of the data, by developing methods that are less general and more closely tied to the character of the processed data. The approaches under consideration are feature extractors based on image classifiers trained on images of cutting tools acquired with the developed vision system, and the direct estimation of inference quality with a regression model.
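As an illustration of what such a dissimilarity measure could look like, one common choice is a Fréchet-style distance between Gaussians fitted to two sets of deep features. The sketch below is only a generic example of this family of measures, not the system's actual implementation; the function name, feature dimensionality, and the synthetic feature sets are assumptions for demonstration.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_a, feats_b: (n_samples, n_features) arrays of deep features,
    e.g. activations of a feature extractor applied to two image sets.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; numerical error can
    # introduce a tiny imaginary component, which is discarded.
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Synthetic stand-ins for extracted deep features (assumed shapes).
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 8))   # reference feature set
near = rng.normal(0.0, 1.0, size=(500, 8))  # same distribution
far = rng.normal(3.0, 1.0, size=(500, 8))   # shifted distribution
```

A larger distance indicates that the new images are less similar to the training data, which is the kind of signal a dissimilarity measure would feed into a reliability assessment.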
ACKNOWLEDGEMENTS
We want to thank Wojciech Szeląg and Mariusz Mrzygłod from the Wrocław University of Technology, who took an active part in the development and construction of the machine vision system used, as well as the staff at TCM Poland for providing the materials necessary to carry out the work described in this paper.
VISAPP 2024 - 19th International Conference on Computer Vision Theory and Applications