REFERENCES
Balaji, B., Mallya, S., Genc, S., Gupta, S., Dirac,
L., Khare, V., Roy, G., Sun, T., Tao, Y.,
Townsend, B., Calleja, E., and Muralidhara, S. (2019).
DeepRacer: Educational autonomous racing platform
for experimentation with sim2real reinforcement
learning.
Caltagirone, L., Bellone, M., Svensson, L., and
Wahde, M. (2017). Lidar-based driving path
generation using fully convolutional neural networks.
IEEE International Conference on Intelligent
Transportation Systems 2017.
Courtney-Long, E., Carroll, D., Zhang, Q., Stevens, A.,
Griffin-Blake, S., Armour, B., and Campbell, V.
(2015). Prevalence of disability and disability type
among adults — United States, 2013. MMWR.
Morbidity and mortality weekly report, 64:777–783.
Dai, J., He, K., and Sun, J. (2016). Instance-aware semantic
segmentation via multi-task network cascades. In The
IEEE Conference on Computer Vision and Pattern
Recognition (CVPR).
Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A.,
and Koltun, V. (2017). CARLA: An open urban
driving simulator. In Proceedings of the 1st Annual
Conference on Robot Learning, pages 1–16.
Epic Games (2019). Unreal engine.
https://www.unrealengine.com. [Online]; accessed
August 2020.
Faria, B. M., Reis, L. P., and Lau, N. (2014). A survey
on intelligent wheelchair prototypes and simulators.
In Rocha, Á., Correia, A. M., Tan, F. B., and
Stroetmann, K. A., editors, New Perspectives
in Information Systems and Technologies, Volume
1, pages 545–557, Cham. Springer International
Publishing.
Giuffrida, G., Meoni, G., and Fanucci, L. (2019). A
YOLOv2 convolutional neural network-based
human–machine interface for the control of assistive robotic
manipulators. Applied Sciences, 9(11):2243.
GPII DeveloperSpace (2020). What is physical disability?
https://ds.gpii.net/content/what-physical-disability.
[Online]; accessed August 2020.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term
memory. Neural computation, 9(8):1735–1780.
Kaelbling, L. P., Littman, M. L., and Moore, A. W.
(1996). Reinforcement learning: A survey. Journal
of artificial intelligence research, 4:237–285.
Lample, G. and Chaplot, D. S. (2016). Playing FPS
games with deep reinforcement learning. ArXiv,
abs/1609.05521.
Leaman, J. and La, H. M. (2017). A comprehensive
review of smart wheelchairs: Past, present, and
future. IEEE Transactions on Human-Machine
Systems, 47(4):486–499.
Lillicrap, T., Hunt, J., Pritzel, A., Heess, N., Erez, T., Tassa,
Y., Silver, D., and Wierstra, D. (2015). Continuous
control with deep reinforcement learning. CoRR.
Merkel, D. (2014). Docker: lightweight linux containers
for consistent development and deployment. Linux
journal, 2014(239):2.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A.,
Antonoglou, I., Wierstra, D., and Riedmiller, M.
(2013). Playing Atari with deep reinforcement
learning.
Nguyen, A. V., Nguyen, L. B., Su, S., and Nguyen, H. T.
(2013a). The advancement of an obstacle avoidance
bayesian neural network for an intelligent wheelchair.
In 2013 35th Annual International Conference of the
IEEE Engineering in Medicine and Biology Society
(EMBC), pages 3642–3645.
Nguyen, J. S., Su, S. W., and Nguyen, H. T. (2013b).
Experimental study on a smart wheelchair system
using a combination of stereoscopic and spherical
vision. In 2013 35th Annual International Conference
of the IEEE Engineering in Medicine and Biology
Society (EMBC), pages 4597–4600.
Pinheiro, O. R., Alves, L. R. G., Romero, M. F. M.,
and de Souza, J. R. (2016). Wheelchair simulator
game for training people with severe disabilities. In
2016 1st International Conference on Technology and
Innovation in Sports, Health and Wellbeing (TISHW),
pages 1–8.
Pithon, T., Weiss, T., Richir, S., and Klinger, E. (2009).
Wheelchair simulators: A review. Technology and
Disability, 21:1–10.
Rasshofer, R. H. and Gresser, K. (2005). Automotive radar
and lidar systems for next generation driver assistance
functions. Advances in Radio Science, 3.
Schöner, H.-P. (2018). Simulation in development and
testing of autonomous vehicles. In Bargende, M.,
Reuss, H.-C., and Wiedemann, J., editors, 18.
Internationales Stuttgarter Symposium, pages 1083–
1095, Wiesbaden. Springer Fachmedien Wiesbaden.
Shi, X., Chen, Z., Wang, H., Yeung, D.-Y., Wong,
W. K., and Woo, W.-c. (2015). Convolutional
lstm network: A machine learning approach for
precipitation nowcasting.
Tokic, M. (2010). Adaptive ε-greedy exploration in
reinforcement learning based on value differences.
In Dillmann, R., Beyerer, J., Hanebeck, U. D.,
and Schultz, T., editors, KI 2010: Advances in
Artificial Intelligence, pages 203–210. Springer Berlin
Heidelberg.
US Department of Health and Human Services (2018).
What are some types of assistive devices and how
are they used?
https://www.nichd.nih.gov/health/topics/rehabtech/conditioninfo/device.
[Online];
accessed August 2020.
Van Hasselt, H., Guez, A., and Silver, D. (2015). Deep
reinforcement learning with double Q-learning.
Yin, Z. and Shi, J. (2018). GeoNet: Unsupervised learning
of dense depth, optical flow and camera pose. In
Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, pages 1983–1992.
Zhang, Q. and Du, T. (2019). Self-driving scale car trained
by deep reinforcement learning.
ICAART 2021 - 13th International Conference on Agents and Artificial Intelligence