Table 2: Statistics of the number of parts and training images.

Image               Parts  Type-1  Type-2  Type-3  Type-4   Total
Left car door           6     305     450     245    2240    3246
Right car door          4     240     460     275    2095    3074
Roof                   12     245     570     355    2090    3272
The car rear body       5     285     450     300    2125    3165
The car front body      3     260     605     385    2155    3408
Wheel                  20     305     565     295    2080    3265
Parts total            50    1905    3100    1855   12785   19695
Image total             5    4500   10500    5250    6300   26555
4 CONCLUSIONS
This study proposes a set of processing procedures for classifying the materials of part models in a simulation system, reducing the manual effort and time otherwise required to enter material information for each part model. Traditional image processing techniques are first used to segment the individual parts in the texture images and to generate large composite texture images; classifying parts from 2D texture images in this way overcomes the shortage of texture images. The texture images used in the experiment contain a total of 88 parts. In the automatic segmentation experiment, fine segmentation correctly segmented two fewer parts than simple segmentation, but it reduced the number of over-segmented parts from 62 to 19. The remaining errors occur where there are no gaps between parts and the color difference between the parts and the background is small, which leads to unsatisfactory segmentation. Because the two segmentation methods perform well on different texture images, their results are combined to obtain the automatically segmented parts. The segmented parts are then augmented by rearranging the RGB channels, changing the color and rotation of the parts, and varying the background, and the parts and backgrounds are randomly combined into large texture images that serve as training images for the deep learning model, improving the classification accuracy of the part categories.
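A minimal sketch of the traditional segmentation step described above, assuming OpenCV 4 and parts that are darker than the background; the Otsu thresholding, the 5x5 closing kernel, the min_area cut-off, and the function name segment_parts are illustrative assumptions, not the paper's "simple" or "fine" segmentation procedures.

```python
import cv2
import numpy as np

def segment_parts(texture_bgr: np.ndarray, min_area: int = 500):
    """Return bounding boxes (x, y, w, h) of candidate parts in a texture image."""
    gray = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's method picks a global threshold separating parts from background
    # (THRESH_BINARY_INV assumes the parts are darker than the background).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Morphological closing fills small gaps so each part forms one region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    # External contours correspond to individual parts; tiny blobs are treated as noise.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```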
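The channel-rearrangement, color/rotation, and random-composition augmentation can likewise be sketched with NumPy alone; the ±20 color shift, the 90-degree rotation steps, the overwrite-style pasting, and the names augment_part and compose_training_image are assumptions for illustration rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_part(part: np.ndarray) -> np.ndarray:
    """Augment one H x W x 3 RGB crop of a segmented part."""
    # Rearrange the RGB channels (one of the six permutations).
    part = part[:, :, rng.permutation(3)]
    # Shift the color slightly and clip back to the valid range (shift size is an assumption).
    part = np.clip(part.astype(np.int16) + rng.integers(-20, 21), 0, 255).astype(np.uint8)
    # Rotate by a random multiple of 90 degrees.
    return np.rot90(part, k=rng.integers(0, 4))

def compose_training_image(parts, background: np.ndarray) -> np.ndarray:
    """Randomly paste augmented parts onto a background to build one large training image."""
    canvas = background.copy()
    for part in parts:
        p = augment_part(part)
        h, w = p.shape[:2]
        y = rng.integers(0, canvas.shape[0] - h)
        x = rng.integers(0, canvas.shape[1] - w)
        canvas[y:y + h, x:x + w] = p   # simple overwrite, no blending
    return canvas

# Example: three dummy 64x64 parts composed onto a 512x512 background.
parts = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(3)]
background = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)
train_img = compose_training_image(parts, background)
```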
ACKNOWLEDGEMENTS
This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant No. MOST 109-2221-E-025-010.