REFERENCES
Alemi, A. A., Poole, B., Fischer, I., Dillon, J. V., Saurous, R. A., and Murphy, K. (2017). Fixing a broken ELBO. arXiv preprint arXiv:1711.00464.
Chung, J., Kastner, K., Dinh, L., Goel, K., Courville, A., and Bengio, Y. (2015). A recurrent latent variable model for sequential data. arXiv preprint arXiv:1506.02216.
Cui, Z., Chen, W., and Chen, Y. (2016). Multi-scale convolutional neural networks for time series classification. arXiv preprint arXiv:1603.06995.
Dai, B. and Wipf, D. (2019). Diagnosing and enhancing VAE models. arXiv preprint arXiv:1903.05789.
Dupont, E. (2018). Learning disentangled joint continuous and discrete representations. In Advances in Neural Information Processing Systems, pages 710–720.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680.
Grochow, K., Martin, S. L., Hertzmann, A., and Popović, Z. (2004). Style-based inverse kinematics. In ACM Transactions on Graphics (TOG), volume 23, pages 522–531. ACM.
Ha, S. and Choi, S. (2016). Convolutional neural networks for human activity recognition using multiple accelerometer and gyroscope sensors. In 2016 International Joint Conference on Neural Networks (IJCNN), pages 381–388. IEEE.
Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. (2018). Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670.
Hoffman, M. D. and Johnson, M. J. (2016). ELBO surgery: yet another way to carve up the evidence lower bound. In NIPS Workshop on Advances in Approximate Bayesian Inference.
Holden, D., Saito, J., and Komura, T. (2016). A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics (TOG), 35(4):138.
Hsu, W.-N., Zhang, Y., and Glass, J. (2017). Unsupervised learning of disentangled and interpretable representations from sequential data. In Advances in Neural Information Processing Systems, pages 1878–1889.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Li, J., Xu, K., Chaudhuri, S., Yumer, E., Zhang, H., and Guibas, L. (2017). GRASS: Generative recursive autoencoders for shape structures. ACM Transactions on Graphics (TOG), 36(4):52.
Li, Y. and Mandt, S. (2018). Disentangled sequential autoencoder. arXiv preprint arXiv:1803.02991.
Maddison, C. J., Mnih, A., and Teh, Y. W. (2016). The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712.
Pintelon, R. and Schoukens, J. (2012). System identification: a frequency domain approach. John Wiley & Sons.
Poole, B., Ozair, S., van den Oord, A., Alemi, A. A., and Tucker, G. (2018). On variational lower bounds of mutual information. In NeurIPS Workshop on Bayesian Deep Learning.
Sanchez-Lengeling, B. and Aspuru-Guzik, A. (2018). Inverse molecular design using machine learning: Generative models for matter engineering. Science, 361(6400):360–365.
Schoukens, J., Vaes, M., and Pintelon, R. (2016). Linear system identification in a nonlinear setting: Nonparametric analysis of the nonlinear distortions and their impact on the best linear approximation. IEEE Control Systems Magazine, 36(3):38–69.
Schulman, J., Moritz, P., Levine, S., Jordan, M., and Abbeel, P. (2015). High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.
Smith, L. N. (2017). Cyclical learning rates for training neural networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464–472. IEEE.
Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. (2017). Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558.
Tomczak, J. M. and Welling, M. (2018). VAE with a VampPrior. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1214–1223.
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.
Veličković, P., Fedus, W., Hamilton, W. L., Liò, P., Bengio, Y., and Hjelm, R. D. (2018). Deep graph infomax. arXiv preprint arXiv:1809.10341.
Wampler, K., Popović, Z., and Popović, J. (2014). Generalizing locomotion style to new animals with inverse optimal regression. ACM Transactions on Graphics (TOG), 33(4):49.
Zhang, H., Starke, S., Komura, T., and Saito, J. (2018a). Mode-adaptive neural networks for quadruped motion control. ACM Transactions on Graphics (TOG), 37(4):145.
Zhang, Y., Galley, M., Gao, J., Gan, Z., Li, X., Brockett, C., and Dolan, B. (2018b). Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Systems, pages 1810–1820.
Zhao, S., Song, J., and Ermon, S. (2019). InfoVAE: Balancing learning and inference in variational autoencoders. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 5885–5892.