drove precisely in an unseen environment with different textures and surroundings. Sample images of the seen and unseen test areas are shown in Figure 13 and Figure 14, respectively.
Weather conditions cannot be altered and pedestrian behavior cannot be prescribed in the real world. The real-environment experiments therefore focused on driving along the trained path with people obstructing the view and on driving in an unseen environment.
6 CONCLUSIONS
This paper identifies factors affecting end-to-end driving in pedestrian zones. The work is first carried out in simulation and later transferred to a real system. A CNN is designed to predict the steering angle of a vehicle from RGB images captured by a camera mounted on the roof of a minibus. The system is tested in simulation under different weather conditions and pedestrian placements. The results show that the end-to-end system predicts reliably on the trained path across the different weather classes. When trained well for a particular environment it shows promising results, but relying on this system alone to drive the vehicle is not yet advisable, since it is not known when the system will enter a failure state. Overall, the main cause of failure was strong shadows, and the presence of a crowd caused the vehicle to steer slightly off course. In future work, it is planned to include depth images and an extra output to handle shadows and intersections, respectively.
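As a rough illustration of the overall approach (not the exact network, input resolution, or training setup used in this work), the following Keras sketch shows the general pattern of such an end-to-end steering predictor: a small convolutional stack that regresses a single steering angle from an RGB camera image. The 160x320 input size and all layer widths are assumptions made for the sketch.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_steering_model(input_shape=(160, 320, 3)):
    """Minimal CNN that maps one RGB camera image to one steering angle."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255.0),            # normalize pixel values to [0, 1]
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(1),                          # regressed steering angle
    ])
    model.compile(optimizer="adam", loss="mse")   # regression on steering angle
    return model

model = build_steering_model()
model.summary()
```

Trained by behavioral cloning on image/steering pairs recorded along the target path, a model of this shape learns the mapping camera image to steering command directly, which is why its behavior degrades under conditions absent from the training data, such as strong shadows.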