Figure 12: With no obstacles in the user's path, the system tells the user they can go straight.
5 CONCLUSIONS
The implemented system assists visually impaired individuals in perceiving their surroundings through three modes of operation: object detection, color detection, and in-store product detection. The system can identify an object or a color specified by the user, or analyze the environment and report the objects present or the predominant color. When the user is in a store, the system detects the product sections on the shelves in front of them, helping them navigate and find the products they need. Additionally, since navigating without assistance is a real challenge for visually impaired individuals, the system supports obstacle avoidance in both outdoor and indoor environments and can offer a summary of the route the user wishes to take, including information about streets and public transportation.
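The three modes above suggest a simple command-routing layer between speech recognition and the detection back-ends. The following sketch is purely illustrative; the mode names, handler signatures, and responses are assumptions, not the authors' implementation:

```python
# Hypothetical top-level mode dispatcher for the assistance system.
# Each handler would wrap the corresponding detection pipeline; here
# they return placeholder strings to keep the sketch self-contained.

def handle_objects(query):
    """Object mode: find a named object, or describe the whole scene."""
    if query:
        return f"object mode: looking for {query}"
    return "object mode: scanning scene"

def handle_color(query):
    """Color mode: check for a named color, or report the dominant one."""
    if query:
        return f"color mode: checking for {query}"
    return "color mode: dominant color"

def handle_products(query):
    """Product mode: read out the shelf sections in front of the user."""
    return "product mode: reading shelf sections"

HANDLERS = {
    "objects": handle_objects,
    "color": handle_color,
    "products": handle_products,
}

def dispatch(mode, query=None):
    """Route a spoken command to the matching mode handler."""
    try:
        return HANDLERS[mode](query)
    except KeyError:
        # Unrecognized command: ask the user to repeat instead of failing.
        return "unknown mode; please repeat the command"
```

In a real deployment the returned strings would be passed to the text-to-speech component rather than printed.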
The system was tested in real scenarios that simulated situations in which visually impaired people might encounter problems, and all of these tests yielded encouraging results.
In the future, the presented system can be improved in several ways. The most important future development is making the system portable by powering the Nvidia development board from a battery and adding a Wi-Fi module so that the Internet connection no longer depends on an Ethernet cable. This connection is required by the Google Text-to-Speech Python library.
Another development is the addition of a GPS module that supplies the route summary mode with the user's location in real time, allowing the system to guide the user toward the desired destination.
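Real-time guidance from GPS fixes typically starts from the great-circle distance between the current position and the next waypoint. A minimal sketch of the standard haversine formula follows; the function name and radius constant are our own choices, not part of the described system:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes.

    Uses the haversine formula with a mean Earth radius; accurate to
    well under 1% for pedestrian-scale distances.
    """
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))
```

The system could speak the remaining distance to the next turn whenever it drops below a threshold, then advance to the next waypoint of the route summary.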
Also, for better accuracy, the SSD model for in-store product recognition should be retrained on a larger image set covering a wider variety of product packaging.
Last but not least, the algorithm that calculates the distance to the nearest obstacle can be improved to differentiate the floor from objects. This would increase the accuracy of the calculation, making the system safer for users.
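One simple way to separate the floor from obstacles, assuming a roughly level ground plane and a known camera mounting height, is to classify each 3D point by its height below the camera. This is only a sketch of that idea; the parameter values and coordinate convention are assumptions:

```python
def split_floor(points, cam_height=1.2, tol=0.10):
    """Separate floor points from obstacle points.

    `points` are (x, y, z) coordinates in the camera frame with the
    y axis pointing down. A point is treated as floor if its vertical
    offset below the camera is within `tol` meters of the assumed
    camera mounting height `cam_height`; everything else is a
    potential obstacle.
    """
    floor, obstacles = [], []
    for p in points:
        if abs(p[1] - cam_height) <= tol:
            floor.append(p)
        else:
            obstacles.append(p)
    return floor, obstacles
```

Restricting the nearest-obstacle distance computation to the non-floor points would stop the ground itself from triggering obstacle warnings; more robust variants fit the ground plane explicitly (e.g. with RANSAC) instead of assuming it is level.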
We also envision testing the system as a whole
with volunteers in real usage conditions.
ACKNOWLEDGEMENTS
The research work on training, fine-tuning, and testing some of the deep learning models used in this paper was partially supported by the CLOUDUT Project, co-funded by the European Regional Development Fund through the Competitiveness Operational Programme 2014-2020, contract no. 235/2020.
ICINCO 2024 - 21st International Conference on Informatics in Control, Automation and Robotics