stochastic optimization. We have found this optimizer to be particularly well suited to our problem because of its effective handling of sparse data representations such as ours.
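Since the paper does not include its training code, the following is only a minimal sketch of how Adam might be attached to a convolutional classifier in PyTorch; the framework choice, the 44-plane 9x9 input encoding, the layer sizes, and the learning rate are assumptions for illustration, not the authors' reported settings.

    import torch

    # Hypothetical stand-in for the DCNN described in the paper; the input
    # encoding (44 planes over a 9x9 shogi board) and layer sizes are assumed.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(in_channels=44, out_channels=64, kernel_size=3, padding=1),
        torch.nn.ReLU(),
        torch.nn.Flatten(),
        torch.nn.Linear(64 * 9 * 9, 2),  # two classes: which player wins
    )

    # Adam with the default hyperparameters proposed by Kingma and Ba (2014).
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), eps=1e-8)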
4 RESULTS
Our deep convolutional neural network was trained for 20 epochs, achieving a classification success rate above 80% in every run of the network and an 82.7% success rate in the best case (Figure 4). The training data was a random 80% of 200,000 board states sampled from our novel data set, and the validation data was the remaining 20% of those 200,000 states.
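Purely as an illustrative sketch (the file names, board encoding, and split mechanics below are assumptions rather than the authors' published pipeline), the 80/20 split of the 200,000 sampled board states could be produced as follows:

    import numpy as np

    # Hypothetical arrays: encoded board states and game-outcome labels for the
    # 200,000 positions sampled from the full data set.
    boards = np.load("board_states.npy")   # shape and encoding are assumed
    labels = np.load("labels.npy")         # 0/1 label for the eventual winner (assumed)

    rng = np.random.default_rng(seed=0)
    order = rng.permutation(len(boards))
    split = int(0.8 * len(boards))                      # 80% training, 20% validation
    train_idx, val_idx = order[:split], order[split:]

    X_train, y_train = boards[train_idx], labels[train_idx]
    X_val, y_val = boards[val_idx], labels[val_idx]
    # The network is then trained for 20 epochs on (X_train, y_train), and the
    # reported accuracy is measured on (X_val, y_val).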
Our greater-than-80% success rate is notable in several respects. Most importantly, this level of accuracy allows for far more successful dynamic match prediction from a single board than in previous works. In the more tractable domain of dynamic chess match prediction, Masud et al. (2015) achieved a success rate of nearly 66% under similar conditions. Related attempts at binary classification have confronted simpler problems, particularly when tackling the difficult domain of shogi: Grimbergen (1997) achieved a success rate greater than 80%, but on the far more manageable problem of deciding whether or not the king is in danger, rather than predicting the winner of the entire game from a single board.
Our promising results indicate a significant step forward in deterministic board game classification and open up a number of new opportunities for game-playing agents that can be built without the prohibitive cost of the standard evaluation functions used in other state-of-the-art programs.
5 CONCLUSIONS
Our results display several meaningful steps forward in the domain of classifying shogi board states and evaluating a player's position more efficiently than previously shown. Our strategy of DCNN-based classification allows us to give an accurate estimate of the winner of a game of shogi without any input from a subject matter expert, relying instead on an online match predictor. This classification method can also be implemented and used by developers with little to no experience in the domain, because the algorithm is agnostic to any game rules or heuristics.
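To make the rule-agnostic claim concrete, the sketch below shows what single-board inference could look like; the helper name, the assumption that boards arrive already encoded as tensors, and the two-class output are illustrative placeholders rather than the authors' published interface.

    import torch

    def predict_winner(model: torch.nn.Module, encoded_board: torch.Tensor) -> int:
        """Hypothetical helper: classify a single, already-encoded board state.

        The model sees only the numeric board encoding; no shogi rules,
        legal-move generation, or hand-crafted heuristics are consulted.
        """
        model.eval()
        with torch.no_grad():
            logits = model(encoded_board.unsqueeze(0))   # add a batch dimension
            return int(logits.argmax(dim=1).item())      # 0 or 1: predicted winner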
Additionally, we are able to make these predictions with a small fraction of the computational and temporal resources required by other large state-of-the-art algorithms.

Figure 4: The classification accuracy of ten runs of our DCNN classifier. These ten runs were executed sequentially with consistent parameters and random subsets of our one-million-board-state dataset, as determined by our 80/20 training split.

With our results achieving over 80% accuracy in predicting online match outcomes, this contribution presents itself as a reasonable alternative for classifying single board states compared to the high computation time that other shogi engines require. This efficiency allows the classifier to be trained in a matter of minutes to hours on a standard desktop computer or laptop.
Consequently, this sophisticated shogi analysis becomes accessible to a broader audience across skill levels. Future research will explore further optimizations and applications, extend these techniques to other complex strategy games, and enhance their educational and competitive use.
REFERENCES
Campbell, M., Hoane, A., and Hsu, F.-h. (2002). Deep Blue. Artificial Intelligence, 134(1–2):57–83.
Grimbergen, R. (1997). Pattern recognition for candidate
generation in the game of shogi. CiteSeerX. Accessed:
2024-02-28.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Masud, M. M., Al-Shehhi, A., Al-Shamsi, E., Al-Hassani, S., Al-Hamoudi, A., and Khan, L. (2015). Online Prediction of Chess Match Result, pages 525–537. Springer International Publishing.
Samuel, A. L. (1959). Some studies in machine learning
using the game of checkers. IBM Journal of Research
and Development, 3(3):210–229.
Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., Lillicrap, T., and Silver, D. (2020). Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604–609.