number of selected pixels. Figure 11 shows that proposed method 1 achieved reliable matching in almost the same processing time as the CPTM and CoPTM methods. Figures 10 and 11 show that proposed method 1 achieves the same performance as YOLOv5 despite its short off-line processing time (3 sec): it had a processing time of 58 msec and a recognition success rate of 94%, whereas YOLOv5, the comparative learning-based method, had a processing time of 55 msec and a recognition success rate of 91%. The results of proposed method 2, shown in Figure 10, confirm that preferentially selecting pixels that are robust to similar objects is effective in improving the recognition rate.
5 CONCLUSIONS
In this study, we proposed a fast image matching method that uses only pixels effective for matching, selected on the basis of two measures computed from color and grayscale images. Experiments using 100 real images showed that when approximately 0.5% (68 pixels) of the 117 × 117 template image was used, the recognition success rate was 80% and the processing time was 5.9 msec; when 5.0% (648 pixels) was used, the success rate was 98% and the processing time was 80 msec. These results confirm that both high speed and high reliability are achievable. The recognition rate of the proposed method decreases in the presence of disturbances such as rotation, illumination change, and shading. We would like to improve the method by adding images that include highlights and illumination changes to the positive samples and by refining the pixel selection algorithm. In addition, since we used our own datasets in this experiment, we would like to evaluate the method on public datasets (Wu et al., 2013) in the future.
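
To make the matching step concrete, the following is a minimal Python/NumPy sketch of template matching restricted to a selected pixel subset; it is not the implementation evaluated in this paper. Random pixel selection and an SSD score stand in for the paper's distinctiveness-based selection and similarity measures, and all names and parameters are illustrative.

    import numpy as np

    def match_sparse(image, template, pixels):
        # Slide the template over a grayscale image, scoring each location
        # with the sum of squared differences (SSD) evaluated only at the
        # selected (row, col) offsets in `pixels`.
        # Returns the best-scoring top-left corner and its score.
        ih, iw = image.shape
        th, tw = template.shape
        rows, cols = pixels[:, 0], pixels[:, 1]
        tvals = template[rows, cols].astype(np.float64)
        best_score, best_pos = np.inf, (0, 0)
        for y in range(ih - th + 1):
            for x in range(iw - tw + 1):
                wvals = image[y + rows, x + cols].astype(np.float64)
                score = np.sum((wvals - tvals) ** 2)  # SSD over selected pixels only
                if score < best_score:
                    best_score, best_pos = score, (y, x)
        return best_pos, best_score

    # Illustrative use: pick 68 of the 117 x 117 template pixels (about 0.5%)
    # at random, as a stand-in for the distinctiveness-based selection.
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, (200, 260), dtype=np.uint8)
    template = image[40:157, 90:207].copy()           # plant a known match
    flat = rng.choice(117 * 117, size=68, replace=False)
    pixels = np.stack(np.unravel_index(flat, (117, 117)), axis=1)
    print(match_sparse(image, template, pixels))      # expect ((40, 90), 0.0)

Because the score touches only the selected pixels, the per-window cost scales with the subset size rather than the full template area, which is the source of the speedup reported above; a vectorized or early-termination variant would be needed to reach the millisecond-order processing times.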
REFERENCES
Alcantarilla, P. F., Nuevo, J., and Bartoli, A. (2013). Fast explicit diffusion for accelerated features in nonlinear scale spaces. In Proceedings of the British Machine Vision Conference (BMVC).
Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679–698.
Dekel, T., Oron, S., Rubinstein, M., Avidan, S., and Freeman, W. T. (2015). Best-buddies similarity for robust template matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2021–2029.
Dubuisson, M.-P. and Jain, A. K. (1994). A modified Hausdorff distance for object matching. In Proceedings of the 12th International Conference on Pattern Recognition, volume 1, pages 566–568. IEEE.
Hashimoto, M., Fujiwara, T., Koshimizu, H., Okuda, H.,
and Sumi, K. (2010). Extraction of unique pix-
els based on co-occurrence probability for high-speed
template matching. In 2010 International Symposium
on Optomechatronic Technologies, pages 1–6. IEEE.
Jocher, G., Nishimura, K., Mineeva, T., and Vilariño, R. (2020). YOLOv5. Code repository.
Kat, R., Jevnisek, R., and Avidan, S. (2018). Matching pix-
els using co-occurrence statistics. In Proceedings of
the IEEE Conference on Computer Vision and Pattern
Recognition, pages 1751–1759.
Korman, S., Milam, M., and Soatto, S. (2018). OATM: Occlusion aware template matching by consensus set maximization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2675–2683.
Korman, S., Reichman, D., Tsur, G., and Avidan, S. (2013).
Fast-match: Fast affine template matching. In Pro-
ceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, pages 2331–2338.
Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, volume 2, pages 1150–1157. IEEE.
Shevlev, I. and Avidan, S. (2019). Co-occurrence neural
network. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, pages
4797–4804.
Tagami, R., Eba, S., Nakabayashi, N., Akizuki, S., and
Hashimoto, M. (2022). Template matching using a
small number of pixels selected by distinctiveness of
quantized hue values. In International Workshop on
Advanced Imaging Technology (IWAIT) 2022, volume
12177, pages 662–667. SPIE.
Talmi, I., Mechrez, R., and Zelnik-Manor, L. (2017). Tem-
plate matching with deformable diversity similarity. In
Proceedings of the IEEE Conference on Computer Vi-
sion and Pattern Recognition, pages 175–183.
Wu, Y., Lim, J., and Yang, M.-H. (2013). Online object
tracking: A benchmark. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recogni-
tion (CVPR).
Xiao, J. and Wei, H. (2014). Scale-invariant contour seg-
ment context in object detection. Image and Vision
Computing, 32(12):1055–1066.
Yu, Q., Wei, H., and Yang, C. (2017). Local part chamfer
matching for shape-based object detection. Pattern
Recognition, 65:82–96.