fully tested on various embedded targets with limited computing capacity and latency, showing that a pre-trained LSTM or GRU can be embedded on such devices (for example, to complement a Kalman Filter for fault-tolerance purposes). However, the training phase is still too greedy in terms of computing resources to allow online training on embedded devices. One benefit of our work is the release of an open-source package including source code, datasets and logs. Therefore, our application, its implementation details, and the results are accessible and can be used, reproduced and extended.
Two important future investigations would be to migrate our implementation to a GPU-based version (using the GPU of the Jetson Nano device, for example) and to create an FPGA-based architecture (using the FPGA part of the Pynq Z2 device, for example). This would enable (1) running larger LSTM/GRU neural networks and (2) tackling online training capacities, which can be key for embedded-systems algorithms dealing with uncertainties in their environment. Our first efforts regarding GPU- and FPGA-based architectures are very encouraging.
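As a minimal illustration of such a GPU migration, a pre-trained GRU can be moved to the Jetson Nano's CUDA device with PyTorch; this is a hypothetical sketch, and the layer sizes and file name are illustrative, not those of our autopilot model:

```python
import torch
import torch.nn as nn

# Select the Jetson Nano's GPU when available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative GRU; in practice the pre-trained weights would be loaded,
# e.g. model.load_state_dict(torch.load("autopilot_gru.pt")).
model = nn.GRU(input_size=6, hidden_size=32, batch_first=True).to(device)
model.eval()  # inference only: no online training on the device

with torch.no_grad():
    # One window of 50 sensor samples with 6 features each.
    x = torch.randn(1, 50, 6, device=device)
    out, h = model(x)

print(out.shape)  # per-step hidden states: (1, 50, 32)
```

The same script runs unchanged on CPU-only targets, which makes it a convenient starting point before committing to a device-specific deployment.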
From the LSTM/GRU architecture point of view, we are currently exploring the possibility of modeling the autopilot data using a bidirectional configuration for GRU and LSTM, as has been done for machine translation applications (Schuster and Paliwal, 1997; Sutskever et al., 2014). Last but not least, even if the results obtained for the proposed test case were sufficient for the desired analysis, they have to be extended. A real-life application using RNNs requires more effort on the training part, especially on the training dataset; therefore, we are now working on building a denser dataset (for example, one including many more flight conditions or UAV types).
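The bidirectional configuration mentioned above can be sketched as follows; this is a hypothetical PyTorch example, and the feature, hidden, and output dimensions are illustrative, not those of our dataset:

```python
import torch
import torch.nn as nn

class BiGRURegressor(nn.Module):
    """Illustrative bidirectional GRU mapping a sensor window to outputs."""

    def __init__(self, n_features=6, hidden_size=32, n_outputs=4):
        super().__init__()
        # bidirectional=True runs the sequence forward and backward,
        # so each time step sees both past and future context.
        self.gru = nn.GRU(n_features, hidden_size,
                          batch_first=True, bidirectional=True)
        # The two directions are concatenated, hence 2 * hidden_size.
        self.head = nn.Linear(2 * hidden_size, n_outputs)

    def forward(self, x):
        out, _ = self.gru(x)          # (batch, seq_len, 2 * hidden_size)
        return self.head(out[:, -1])  # predict from the last time step

model = BiGRURegressor()
y = model(torch.randn(8, 50, 6))  # batch of 8 windows of 50 samples
print(tuple(y.shape))             # (8, 4)
```

Note that a bidirectional model needs the whole window before producing an output, which must be weighed against the latency constraints of embedded targets.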
ACKNOWLEDGEMENTS
This work has been partially supported by
the Defense Innovation Agency (AID) of the
French Ministry of Defense under Grant No.:
2018.60.0072.00.470.75.01.
REFERENCES
Baomar, H. and Bentley, P. (2017). Autonomous landing and go-around of airliners under severe weather conditions using artificial neural networks. In 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), pages 162–167.
Baomar, H. and Bentley, P. J. (2016). An intelligent au-
topilot system that learns piloting skills from human
pilots by imitation. In 2016 International Conference
on Unmanned Aircraft Systems (ICUAS), pages 1023–
1031.
Brownlee, J. (2019). Loss and Loss Functions for Training
Deep Learning Neural Networks.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1724–1734.
Chung, J., Gulcehre, C., Cho, K., and Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. pages 1–9.
Elman, J. L. (1990). Finding structure in time. Cognitive
Science, 14(2):179–211.
Flores, A. and Flores, G. (2020). Transition control of a tail-sitter UAV using recurrent neural networks. In 2020 International Conference on Unmanned Aircraft Systems (ICUAS), pages 303–309.
García, J., Molina, J. M., and Trincado, J. (2020). Real evaluation for designing sensor fusion in UAV platforms. Information Fusion, 63:136–152.
Gers, F. A., Schmidhuber, J. A., and Cummins, F. A. (2000). Learning to forget: Continual prediction with LSTM. Neural Comput., 12(10):2451–2471.
Grigorescu, S. M., Trasnea, B., Cocias, T. T., and Mace-
sanu, G. (2019). A survey of deep learning techniques
for autonomous driving. CoRR, abs/1910.07738.
Haykin, S. (1999). Neural Networks: A Comprehensive
Foundation. Prentice Hall, Upper Saddle River, NJ.
2nd edition.
Irie, B. and Miyake, S. (1988). Capabilities of three-layered perceptrons. In IEEE 1988 International Conference on Neural Networks, pages 641–648 vol. 1.
Kamiş, S. and Goularas, D. (2019). Evaluation of deep learning techniques in sentiment analysis from Twitter data. In Proceedings of the 2019 International Conference on Deep Learning and Machine Learning in Emerging Applications (Deep-ML 2019), pages 12–17.
Kingma, D. P. and Ba, J. (2015). Adam: A method for
stochastic optimization. In Bengio, Y. and LeCun,
Y., editors, 3rd International Conference on Learn-
ing Representations, ICLR 2015, San Diego, CA, USA,
May 7-9, 2015, Conference Track Proceedings.
Kulkarni, R., Dhavalikar, S., and Bangar, S. (2018). Traffic light detection and recognition for self-driving cars using deep learning. In Proceedings of the 2018 4th International Conference on Computing, Communication Control and Automation (ICCUBEA 2018), pages 2–5.
Minsky, M. and Papert, S. A. (1969). Perceptrons: An Intro-
duction to Computational Geometry. The MIT Press.
Pascanu, R., Mikolov, T., and Bengio, Y. (2013). On
the difficulty of training recurrent neural networks.
In Proceedings of the 30th International Confer-
ence on International Conference on Machine Learn-
ing - Volume 28, ICML’13, page III–1310–III–1318.
JMLR.org.
NCTA 2021 - 13th International Conference on Neural Computation Theory and Applications