edge, this is the first application of ACAI to drum-
pattern generation.
An interactive visualization of the latent space is provided as a tool for subjectively assessing the quality of
the generated percussion patterns.
Finally, the paper explores the possibility of gener-
ating melodies that correspond to a given input pat-
tern and demonstrates that this can be done with a
relatively straightforward artificial neural network.
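As a rough illustration, not the authors' implementation, a melody-from-pattern mapping of the kind described above can be sketched as a small feed-forward network. All dimensions (a 16-step, 4-instrument binary drum pattern mapped to a 16-step melody over a 12-pitch vocabulary), the layer sizes, and the greedy per-step decoding are assumptions for the sake of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16-step binary drum pattern, 4 instruments,
# 16-step melody over a 12-pitch vocabulary, one hidden layer of 64 units.
STEPS, INSTRUMENTS, PITCHES, HIDDEN = 16, 4, 12, 64

# Randomly initialised weights stand in for trained parameters.
W1 = rng.normal(0, 0.1, (STEPS * INSTRUMENTS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, STEPS * PITCHES))
b2 = np.zeros(STEPS * PITCHES)

def melody_from_pattern(pattern):
    """Map a (STEPS, INSTRUMENTS) binary drum pattern to STEPS melody pitches."""
    h = np.tanh(pattern.reshape(-1) @ W1 + b1)       # hidden layer
    logits = (h @ W2 + b2).reshape(STEPS, PITCHES)   # per-step pitch logits
    return logits.argmax(axis=1)                     # greedy pitch per step

pattern = rng.integers(0, 2, (STEPS, INSTRUMENTS))
melody = melody_from_pattern(pattern)
```

With trained weights, such a network would emit one pitch index per time step conditioned on the full drum pattern; a recurrent or sequence-to-sequence variant would be a natural refinement.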
ACKNOWLEDGEMENTS
The authors would like to thank Valentina Barsuk for
her constructive advice and profound expertise.
REFERENCES
Agafonova, Y., Tikhonov, A., and Yamshchikov, I. P.
(2020). Paranoid transformer: Reading narrative
of madness as computational approach to creativity.
arXiv preprint arXiv:2007.06290.
Berthelot, D., Raffel, C., Roy, A., and Goodfellow, I.
(2018). Understanding and improving interpolation
in autoencoders via an adversarial regularizer. arXiv
preprint arXiv:1807.07543.
Boulanger-Lewandowski, N., Bengio, Y., and Vincent, P.
(2012). Modeling temporal dependencies in high-
dimensional sequences: Application to polyphonic
music generation and transcription. In Proceedings
of the 29th International Conference on Machine
Learning (ICML 2012), pages 1881–1888.
Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Joze-
fowicz, R., and Bengio, S. (2016). Generating sen-
tences from a continuous space. In 20th SIGNLL Con-
ference on Computational Natural Language Learn-
ing, pages 10–21.
Briot, J.-P., Hadjeres, G., and Pachet, F.-D. (2019). Deep
learning techniques for music generation: A survey.
Choi, K., Fazekas, G., and Sandler, M. (2016). Text-based
lstm networks for automatic music composition.
arXiv preprint.
Chu, H., Urtasun, R., and Fidler, S. (2016). Song from pi: A
musically plausible network for pop music generation.
arXiv preprint.
Chuan, C.-H. and Herremans, D. (2018). Modeling tempo-
ral tonal relations in polyphonic music through deep
networks with a novel image-based representation. In
AAAI, pages 2159–2166.
Colombo, F., Muscinelli, S. P., Seeholzer, A., Brea, J.,
and Gerstner, W. (2016). Algorithmic composition
of melodies with deep recurrent neural networks.
arXiv preprint.
Gillick, J., Roberts, A., Engel, J., Eck, D., and Bamman,
D. (2019). Learning to groove with inverse sequence
transformations. arXiv preprint arXiv:1905.06118.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Ben-
gio, Y. (2014). Generative adversarial nets. In
Advances in neural information processing systems,
pages 2672–2680.
Hill, S. (2011). Markov melody generator. Computer Sci-
ence Department, University of Massachusetts Low-
ell. Published Dec. 11.
Hiller, L. and Isaacson, L. (1959). Experimental Music:
Composition with an Electronic Computer. McGraw-
Hill Company.
Johnson, D. D. (2017). Generating polyphonic music us-
ing tied parallel networks. In International Conference
on Evolutionary and Biologically Inspired Music and
Art, pages 128–143. Springer.
Kingma, D. P. and Welling, M. (2014). Auto-encoding vari-
ational bayes. arXiv preprint arXiv:1312.6114.
Lattner, S. and Grachten, M. (2019). High-level control of
drum track generation using learned patterns of rhyth-
mic interaction. In 2019 IEEE Workshop on Appli-
cations of Signal Processing to Audio and Acoustics
(WASPAA), pages 35–39. IEEE.
Lin, H. W. and Tegmark, M. (2017). Critical behavior in
physics and probabilistic formal languages. Entropy,
19(7):299.
Lovelace, A. (1843). Notes on L. Menabrea's sketch of the
Analytical Engine by Charles Babbage, Esq. In Taylor's
Scientific Memoirs.
Lyu, Q., Wu, Z., Zhu, J., and Meng, H. (2015). Modelling
high-dimensional sequences with lstm-rtrbm: Appli-
cation to polyphonic music generation. In Twenty-
Fourth International Joint Conference on Artificial In-
telligence.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014).
Stochastic backpropagation and approximate infer-
ence in deep generative models. ICML, pages 1278–
1286.
Roberts, A., Engel, J., Raffel, C., Hawthorne, C., and Eck,
D. (2018). A hierarchical latent vector model for
learning long-term structure in music. arXiv preprint
arXiv:1803.05428.
Semeniuta, S., Severyn, A., and Barth, E. (2017). A hybrid
convolutional variational autoencoder for text gener-
ation. In Proceedings of the 2017 Conference on
Empirical Methods in Natural Language Processing,
pages 627–637.
Sundermeyer, M., Schlüter, R., and Ney, H. (2015). Lstm
neural networks for language modeling. Interspeech,
pages 194–197.
Wu, J., Hu, C., Wang, Y., Hu, X., and Zhu, J. (2019).
A hierarchical recurrent neural network for symbolic
melody generation. IEEE Transactions on Cybernet-
ics, 50(6):2749–2757.
Yamshchikov, I. and Tikhonov, A. (2018). I feel you:
What makes algorithmic experience personal? In EVA
Copenhagen.
Yamshchikov, I. P. and Tikhonov, A. (2020). Music genera-
tion with variational recurrent autoencoder supported
by history. SN Applied Sciences, 2(12):1–7.
COMPLEXIS 2021 - 6th International Conference on Complexity, Future Information Systems and Risk