of 3D waypoints are generated. By selecting the desired camera model with known camera parameters, the trajectories can be projected to the observer's perspective. To demonstrate the applicability of the synthetic trajectory data, we show that an RNN-MDN prediction model trained solely on the synthetically generated data outperforms classic reference models on a real-world UAV tracking dataset.
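As a concrete illustration of the projection step mentioned above, the following minimal sketch (not taken from the paper) projects generated 3D waypoints into the observer's image plane with an assumed pinhole camera model; the function name, the intrinsics, the camera pose, and the example trajectory are hypothetical placeholders.

```python
import numpy as np

def project_waypoints(points_3d, K, R, t):
    """Project Nx3 world-frame waypoints to Nx2 pixel coordinates (pinhole model)."""
    # Transform world coordinates into the camera frame.
    points_cam = (R @ points_3d.T + t.reshape(3, 1)).T
    # Keep only points in front of the camera (positive depth).
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Apply the intrinsic matrix and normalize by depth.
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Assumed intrinsics for a 1280x720 sensor (illustrative values only).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                  # camera aligned with the world frame
t = np.zeros(3)                # camera at the world origin

# Example: a straight 3D UAV trajectory of 50 waypoints, 30 m in front of the camera.
trajectory_3d = np.stack([np.linspace(-10.0, 10.0, 50),
                          np.full(50, -2.0),
                          np.full(50, 30.0)], axis=1)
trajectory_2d = project_waypoints(trajectory_3d, K, R, t)
print(trajectory_2d.shape)     # (50, 2) pixel coordinates
```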
REFERENCES
Abadi, M. et al. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.

Alahi, A., Goel, K., Ramanathan, V., Robicquet, A., Fei-Fei, L., and Savarese, S. (2016). Social LSTM: Human Trajectory Prediction in Crowded Spaces. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 961–971.

Amirian, J., Hayet, J.-B., and Pettre, J. (2019). Social Ways: Learning Multi-Modal Distributions of Pedestrian Trajectories With GANs. In Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 2964–2972.

Bar-Shalom, Y., Kirubarajan, T., and Li, X.-R. (2002). Estimation with Applications to Tracking and Navigation. John Wiley & Sons, Inc., New York, NY, USA.

Becker, S., Hug, R., Hübner, W., and Arens, M. (2018). RED: A simple but effective Baseline Predictor for the TrajNet Benchmark. In The European Conference on Computer Vision Workshops (ECCVW), volume 11131 of Lecture Notes in Computer Science, pages 138–153. Springer.

Bertsekas, D. (1999). Nonlinear Programming. Athena Scientific.

Bock, J., Krajewski, R., Moers, T., Runde, S., Vater, L., and Eckstein, L. (2020). The inD dataset: A drone dataset of naturalistic road user trajectories at German intersections. 2020 IEEE Intelligent Vehicles Symposium (IV), pages 1929–1934.

Christnacher, F., Hengy, S., Laurenzis, M., Matwyschuk, A., Naz, P., Schertzer, S., and Schmitt, G. (2016). Optical and acoustical UAV detection. In Kamerman, G. and Steinvall, O., editors, Electro-Optical Remote Sensing X, volume 9988, pages 83–95. International Society for Optics and Photonics, SPIE.

Craig, J. (1989). Introduction to Robotics: Mechanics and Control. Addison-Wesley Longman Publishing Co., Inc., USA, 2nd edition.

Deo, N. and Trivedi, M. M. (2018). Multi-modal trajectory prediction of surrounding vehicles with maneuver based LSTMs. In 2018 IEEE Intelligent Vehicles Symposium (IV), pages 1179–1184.

Giuliari, F., Hasan, I., Cristani, M., and Galasso, F. (2021). Transformer Networks for Trajectory Forecasting. In International Conference on Pattern Recognition (ICPR), pages 10335–10342.

Graves, A. (2014). Generating sequences with recurrent neural networks.

Gupta, A., Johnson, J., Fei-Fei, L., Savarese, S., and Alahi, A. (2018). Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 2255–2264.

Hammer, M., Borgmann, B., Hebel, M., and Arens, M. (2019). UAV detection, tracking, and classification by sensor fusion of a 360° lidar system and an alignable classification sensor. In Turner, M. D. and Kamerman, G. W., editors, Laser Radar Technology and Applications XXIV, volume 11005, pages 99–108. International Society for Optics and Photonics, SPIE.

Hammer, M., Hebel, M., Laurenzis, M., and Arens, M. (2018). Lidar-based detection and tracking of small UAVs. In Buller, G. S., Hollins, R. C., Lamb, R. A., and Mueller, M., editors, Emerging Imaging and Sensing Technologies for Security and Defence III; and Unmanned Sensors, Systems, and Countermeasures, volume 10799, pages 177–185. International Society for Optics and Photonics, SPIE.

Hochreiter, S. and Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8):1735–1780.

Hug, R., Becker, S., Hübner, W., and Arens, M. (2021). Quantifying the complexity of standard benchmarking datasets for long-term human trajectory prediction. IEEE Access, 9:77693–77704.

Hug, R., Becker, S., Hübner, W., and Arens, M. (2020). A short note on analyzing sequence complexity in trajectory prediction benchmarks. In Workshop on Long-term Human Motion Prediction (LHMP).

Jeon, S., Shin, J., Lee, Y., Kim, W., Kwon, Y., and Yang, H. (2017). Empirical study of drone sound detection in real-life environment with deep neural networks. In European Signal Processing Conference (EUSIPCO), pages 1858–1862.

Jiang, N., Wang, K., Peng, X., Yu, X., Wang, Q., Xing, J., Li, G., Zhao, J., Guo, G., and Han, Z. (2021). Anti-UAV: A large multi-modal benchmark for UAV tracking.

Kartashov, V., Oleynikov, V., Koryttsev, I., Sheiko, S., Zubkov, O., Babkin, S., and Selieznov, I. (2020). Use of acoustic signature for detection, recognition and direction finding of small unmanned aerial vehicles. In 2020 IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET), pages 1–4.

Kingma, D. and Ba, J. (2015). Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR).

Kothari, P., Kreiss, S., and Alahi, A. (2021). Human trajectory forecasting in crowds: A deep learning perspective. IEEE Transactions on Intelligent Transportation Systems, pages 1–15.

Laurenzis, M., Rebert, M., Schertzer, S., Bacher, E., and Christnacher, F. (2020). Prediction of MUAV flight behavior from active and passive imaging in complex