increased as the learning process progressed, although there were signs of overtraining. Similar patterns could also be discerned in experiment rounds 4.2 through 5.3.
5 DISCUSSION
In experiments 1.1 through 1.6 specified in Table 1, accuracy was highest for motions with a label value of 1 (attacking motions) when the impact width was 7. In addition, as the results in Table 2 show, the per-image accuracy of the label values improved by roughly 20% when the boundary label was 99. This suggests that motions with features effective for motion identification exist within a certain range centered on the offensive impact, while the motions immediately preceding and following that range have features that lower identification accuracy. Similar phenomena were also encountered in a previous experiment that used optical motion capture (Sato and Kuriyama, 2011). It is therefore believed that the motions with effective and non-effective features may be the same whether identification is based on joint position data generated from a human skeletal model or on image data.
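The labeling scheme discussed above can be sketched as follows. This is only an illustrative reading of the discussion, not the study's actual implementation: the function name, the boundary-zone width, and the rule that boundary frames surround the impact window are all assumptions; only the attacking label (1), the boundary label (99), and the impact width of 7 come from the text.

```python
# Illustrative sketch (not the study's code) of labeling video frames
# around an offensive impact. Assumption: frames within `impact_width`
# of the impact get the attacking label (1), frames in a surrounding
# boundary zone get the exclusion label (99), and the rest get 0.

ATTACK, NON_ATTACK, BOUNDARY = 1, 0, 99

def label_frames(num_frames, impact_frame, impact_width=7, boundary_width=3):
    """Assign a label to every frame of a clip containing one impact."""
    labels = [NON_ATTACK] * num_frames
    half = impact_width // 2
    for i in range(num_frames):
        dist = abs(i - impact_frame)
        if dist <= half:
            labels[i] = ATTACK      # inside the impact window
        elif dist <= half + boundary_width:
            labels[i] = BOUNDARY    # ambiguous transition frames
    return labels
```

Excluding the frames labeled 99 from training is one plausible explanation for the accuracy improvement: the ambiguous transition motions that lower identification accuracy would then no longer contaminate the attacking and non-attacking classes.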
Since a fairly high accuracy of about 90% was achieved on attacking motions, as shown in Table 4, and since the offensive impacts that occurred in each experiment were captured accurately, it can be surmised that the CNN-assisted motion identification accuracy targeted by this study has likely been achieved. This level of accuracy is necessary for connecting the CNN to a system capable of identifying various motions at a more advanced level.
6 CONCLUSION
As there has been no published study on basic methods for acquiring motion data and identifying various motions in karate kumite competition, this study conducted CNN-assisted motion identification experiments in which data were acquired with overhead video cameras placed above the contestants so as not to interfere with the contests in progress. CNNs such as the one used in this study can be connected to advanced sports-specific extrapolation systems such as LSTMs, on which studies focusing on other sports have been published (Tsunoda et al., 2017). While a CNN connected in this way must possess a certain level of motion identification accuracy, the experiments conducted in this study achieved sufficient results.
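The CNN-to-LSTM connection mentioned above can be illustrated conceptually: a per-frame CNN compresses each image into a feature vector, and an LSTM consumes the resulting sequence to summarize the whole motion. The sketch below is not the study's architecture; the CNN is replaced by a fixed random projection, the weights are untrained random values, and all dimensions are invented for illustration. Only the overall data flow (frames → per-frame features → recurrent state) reflects the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(frame):
    """Stand-in for the per-frame CNN: maps an image to a feature vector.
    Here it is just a fixed random projection, purely for illustration."""
    return np.tanh(W_cnn @ frame.ravel())

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (input, forget, output, cell gates)."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)   # updated cell state
    h = o * np.tanh(c)           # updated hidden state
    return h, c

# Hypothetical sizes: 8x8 grayscale frames, 16-dim features, 32-dim LSTM state.
D_IN, D_FEAT, D_HID = 64, 16, 32
W_cnn = rng.standard_normal((D_FEAT, D_IN)) * 0.1
W = rng.standard_normal((4 * D_HID, D_FEAT)) * 0.1
U = rng.standard_normal((4 * D_HID, D_HID)) * 0.1
b = np.zeros(4 * D_HID)

frames = rng.standard_normal((10, 8, 8))   # a 10-frame clip
h, c = np.zeros(D_HID), np.zeros(D_HID)
for frame in frames:
    h, c = lstm_step(cnn_features(frame), h, c, W, U, b)
# `h` now summarizes the clip and could feed a sequence-level classifier.
```

In a real pipeline the per-frame CNN would be the trained identification network from this study, and the LSTM would be trained on its feature sequences, as in the cited work on other sports.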
The above describes the basic method this study proposes for effective data acquisition and motion identification in karate kumite contests. As future work, it will be necessary to verify the efficacy of the method when the number of cameras is increased, and to conduct experiments on identifying motions that more closely resemble the contestants' actual movements in official competitive matches.
REFERENCES
Hachaj, T., Ogiela, M. R., and Koptyra, K. (2015).
Application of Assistive Computer Vision Methods to
Oyama Karate Techniques Recognition. Symmetry,
1670-1698.
Ibrahim, M. S., Muralidharan, S., Deng, Z., Vahdat, A., and
Mori, G. (2016). A Hierarchical Deep Temporal Model
for Group Activity Recognition. Proceedings of the
IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), 1971-1980.
Kwon, D. Y., and Gross, M. (2005). Combining Body
Sensors and Visual Sensors for Motion Training.
The 2005 ACM SIGCHI International Conference on
Advances in Computer Entertainment Technology.
Kwon, T., Cho, Y., Park, S. I., and Shin, S. Y. (2008). Two-
Character Motion Analysis and Synthesis. IEEE
Transactions on Visualization and Computer Graphics,
14(3), 707-720.
Nakai, M., Tsunoda, Y., Hayashi, H., and Murakoshi, H.
(2018). Prediction of Basketball Free Throw Shooting
by OpenPose. JSAI International Symposium on
Artificial Intelligence, 435-446.
Mora, S. V., and Knottenbelt, W. J. (2017). Deep Learning
for Domain-Specific Action Recognition in Tennis.
2017 IEEE Conference on Computer Vision and
Pattern Recognition Workshops (CVPRW), 170-178.
Sasaki, K. (2018). 3D Sensing Technology for Real-Time
Quantification of Athletes' Movements. Fujitsu, 13-20
(in Japanese).
Sato, K., and Kuriyama, S. (2011). Classification of karate
motion using feature learning. Information
Processing Society of Japan, 75-80 (in Japanese).
Takasaki, C., Takefusa, A., Nakada, H., and Oguchi, M.
(2019). A Study on Action Recognition Method with
Estimated Pose by using RNN. Information
Processing Society of Japan (in Japanese).
Tomimori, H., Murakami, R., Sato, T., and Sasaki, K.
(2020). A Judging Support System for Gymnastics Using
3D Sensing. Journal of the Robotics Society of Japan
(in Japanese).