Table 1: Comparative runtime evaluation (milliseconds) of the reinforcement learning (RL) and L-system (LS) methods, each run sequentially and in parallel with 8 worker threads.

Number of tiles    1     2     3     4      5      6      7      8      9
Sequential RL      2.03  4.43  7.88  10.4   12.81  15.79  18.21  21.41  23.42
Parallel RL        2.14  3.61  6.33  8.21   9.89   11.83  12.98  13.99  14.61
Sequential LS      1.76  3.98  7.72  10.29  10.63  13.73  18.02  20.33  22.48
Parallel LS        1.99  2.92  5.12  7.71   8.60   11.59  12.33  13.29  14.46
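The parallel timings above use 8 worker threads. As a rough illustration of how such a sequential-versus-parallel comparison can be structured, the sketch below times a plain loop against a thread pool; generate_tile is a hypothetical stand-in for the paper's RL or L-system generators, not their actual implementation, and in CPython a genuine speedup would additionally require the workload to release the GIL (or a process pool).

```python
import time
from concurrent.futures import ThreadPoolExecutor


def generate_tile(seed: int) -> list:
    # Hypothetical stand-in for a real tile generator (an RL rollout
    # or an L-system expansion); here it is deterministic busywork.
    return [(seed * 31 + i) % 97 for i in range(10_000)]


def run_sequential(n_tiles: int) -> float:
    # Generate every tile on the main thread and return elapsed seconds.
    start = time.perf_counter()
    _ = [generate_tile(i) for i in range(n_tiles)]
    return time.perf_counter() - start


def run_parallel(n_tiles: int, workers: int = 8) -> float:
    # Distribute tile generation over a pool of worker threads,
    # mirroring the 8-worker configuration used in Table 1.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        _ = list(pool.map(generate_tile, range(n_tiles)))
    return time.perf_counter() - start
```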
clude terrain elevation. In terms of methodology, we
plan to explore other methods for generating similar
or combined content.
REFERENCES
Bojchevski, A., Shchur, O., Zügner, D., and Günnemann, S.
(2018). NetGAN: Generating graphs via random walks.
Chen, G., Esch, G., Wonka, P., Mueller, P., and Zhang, E.
(2008). Interactive procedural street modeling. ACM
Trans. Graph., 27(3):Article 103: 1–10.
Curry, R. (2000). On the evolution of parametric L-systems.
Danny Alberto, E. C., Luo, X., Navarro Newball, A. A.,
Zúñiga, C., and Lozano-Garzón, C. (2019). Realistic
behavior of virtual citizens through procedural ani-
mation. In 2019 International Conference on Virtual
Reality and Visualization (ICVRV), pages 243–247.
de Araújo, L. J. P., Grichshenko, A., Pinheiro, R. L.,
Saraiva, R. D., and Gimaeva, S. (2020). Map gen-
eration and balance in the Terra Mystica board game
using particle swarm and local search. In Tan, Y., Shi,
Y., and Tuba, M., editors, Advances in Swarm Intelli-
gence, pages 163–175, Cham. Springer International
Publishing.
Dong, J., Liu, J., Yao, K., Chantler, M., Qi, L., Yu, H., and
Jian, M. (2020). Survey of procedural methods for
two-dimensional texture generation. Sensors, 20(4).
Freiknecht, J. and Effelsberg, W. (2017). A survey on the
procedural generation of virtual worlds. Multimodal
Technologies and Interaction, 1(4).
Gisslén, L., Eakins, A., Gordillo, C., Bergdahl, J., and Toll-
mar, K. (2021). Adversarial reinforcement learning
for procedural content generation.
Hartigan, J. A. and Wong, M. A. (1979). A k-means cluster-
ing algorithm. JSTOR: Applied Statistics, 28(1):100–
108.
Hasselt, H. (2010). Double q-learning. In Lafferty, J.,
Williams, C., Shawe-Taylor, J., Zemel, R., and Cu-
lotta, A., editors, Advances in Neural Information
Processing Systems, volume 23. Curran Associates,
Inc.
Heskes, T., Zoeter, O., and Wiegerinck, W. (2004). Approx-
imate expectation maximization. In Thrun, S., Saul,
L., and Schölkopf, B., editors, Advances in Neural In-
formation Processing Systems, volume 16. MIT Press.
Kipf, T. N. and Welling, M. (2016). Variational graph auto-
encoders.
Lara-Cabrera, R., Cotta, C., and Fernández-Leiva, A.
(2012). Procedural map generation for a RTS game.
Li, Z., Wegner, J. D., and Lucchi, A. (2019). Topological
map extraction from overhead images. In Proceedings
of the IEEE/CVF International Conference on Com-
puter Vision (ICCV).
Liu, J., Snodgrass, S., Khalifa, A., Risi, S., Yannakakis,
G. N., and Togelius, J. (2020). Deep learning for pro-
cedural content generation. Neural Computing and
Applications, 33(1):19–37.
Mena, J. and Malpica, J. (2005). An automatic method for
road extraction in rural and semi-urban areas starting
from high resolution satellite imagery. Pattern Recog-
nition Letters, 26(9):1201–1220.
Parish, Y. I. H. and Müller, P. (2001). Procedural model-
ing of cities. In Proceedings of the 28th Annual Con-
ference on Computer Graphics and Interactive Tech-
niques, SIGGRAPH '01, pages 301–308, New York,
NY, USA. Association for Computing Machinery.
Ping, K. and Dingli, L. (2020). Conditional convolutional
generative adversarial networks based interactive pro-
cedural game map generation. In Arai, K., Kapoor,
S., and Bhatia, R., editors, Advances in Information
and Communication, pages 400–419, Cham. Springer
International Publishing.
Reynolds, D. (2009). Gaussian Mixture Models, pages 659–
663. Springer US, Boston, MA.
Rozenberg, G. and Salomaa, A. (1980). Mathematical The-
ory of L Systems. Academic Press, Inc., USA.
Snodgrass, S. and Ontañón, S. (2017). Learning to gen-
erate video game maps using Markov models. IEEE
Transactions on Computational Intelligence and AI in
Games, 9(4):410–422.
Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learn-
ing: An Introduction. A Bradford Book, Cambridge,
MA, USA.
Wang, H., Wang, J., Wang, J., Zhao, M., Zhang, W.,
Zhang, F., Xie, X., and Guo, M. (2017). GraphGAN:
Graph representation learning with generative adver-
sarial nets.
Zhou, D., Zheng, L., Xu, J., and He, J. (2019). Misc-GAN:
A multi-scale generative model for graphs. Frontiers
in Big Data, 2:3.
ICSOFT 2022 - 17th International Conference on Software Technologies