both independent living and long-term care
facilities. Integrating SCRs into such smart
environments can improve the quality of life of older
adults. With personalized machine learning, SCRs can
learn an older adult's preferences, such as preferred
temperature and lighting intensity, and can even
activate robotic vacuum cleaners (e.g., Roomba
(Tribelhorn & Dodds, 2007)) at the preferred times.
We proposed an AR-based system for
interactive interfacing with multiple SCRs deployed
in different rooms/homes. More specifically, we
developed seamless communication between the
Microsoft HoloLens 2 and the Miro-e robot. This
integration serves as an efficient platform for
controlling the robot to assist the older adult,
overriding control when the robot takes unexpected
actions, monitoring health and daily activities (e.g.,
medication or food intake, exercise), and providing
instant communication in emergency situations,
among other functions.
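As an illustration, the following is a minimal sketch of a robot-side command bridge for such an AR interface, assuming the HoloLens 2 application sends newline-delimited JSON commands over TCP; the port number, message fields, and dispatch actions are illustrative assumptions rather than the actual interface used in this work.

# Minimal sketch of a robot-side command bridge, assuming the HoloLens 2 app
# sends newline-delimited JSON commands over TCP. Port, fields, and actions
# are illustrative, not the actual interface used in this work.
import json
import socket

HOST, PORT = "0.0.0.0", 9000  # hypothetical port for the AR-to-robot link

def dispatch(command: dict) -> dict:
    """Map an incoming AR command to a robot action (placeholder logic)."""
    action = command.get("action")
    if action == "override_stop":
        # e.g., forward a stop request to the robot's motion controller
        return {"status": "stopped"}
    if action == "status":
        # e.g., report the last observed activity/health flags
        return {"status": "ok", "activity": "idle"}
    return {"status": "unknown_action"}

def serve() -> None:
    # Accept connections from the AR headset and answer each JSON line.
    with socket.create_server((HOST, PORT)) as server:
        while True:
            conn, _ = server.accept()
            with conn, conn.makefile("rw") as stream:
                for line in stream:
                    reply = dispatch(json.loads(line))
                    stream.write(json.dumps(reply) + "\n")
                    stream.flush()

if __name__ == "__main__":
    serve()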
For more natural interaction between the robot
and the older adult, we developed an improved deep-
learning-based facial emotion recognition (FER)
technique for affective computing. To overcome the
limited computational power of the robot's onboard
computer, we off-loaded the deep learning inference
to edge hardware accelerators, which opens the door
to a wide range of applications. To achieve this, we
integrated Miro-e with Nvidia's Jetson and ran our
FER algorithms on the edge, minimizing the network
latency and the privacy/cybersecurity concerns of
alternative options that require cloud and internet
connectivity.
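For illustration, the following is a minimal sketch of on-device FER inference with a lightweight MobileNetV2 backbone (Howard et al., 2017), as might run on a Jetson-class accelerator; the seven-class emotion head, checkpoint path, and camera source are illustrative assumptions rather than the exact model or pipeline used in this work.

# Minimal sketch of edge FER inference with a lightweight MobileNetV2 backbone.
# The emotion classes, checkpoint path, and camera index are assumptions.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Lightweight backbone suited to resource-limited platforms (Howard et al., 2017)
model = models.mobilenet_v2()
model.classifier[1] = nn.Linear(model.last_channel, len(EMOTIONS))
# model.load_state_dict(torch.load("fer_mobilenetv2.pt"))  # hypothetical checkpoint
model.to(device).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict_emotion(frame_bgr) -> str:
    """Classify the dominant facial emotion in a single camera frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0).to(device)
    with torch.no_grad():
        logits = model(batch)
    return EMOTIONS[int(logits.argmax(dim=1))]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # frames could instead be streamed from the robot
    ok, frame = cap.read()
    if ok:
        print(predict_emotion(frame))
    cap.release()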
Having a central interactive platform through AR
smart glasses for managing multiple robots, together
with the ability to apply state-of-the-art learning
algorithms on the edge, is a milestone toward the
deployment of SCRs in smart environments to assist
older adults. Combining our proposed AR-based
system with applications of affective computing
allows for a more reliable and safer interaction
between the SCR and the older adult.
Our future work will include the integration of
Miro-e with smart home devices for advanced
personalized home automation for older adults. The
integration will focus on safety, security, lighting and
heating/cooling control, as well as the mental health
of the older adult. Another research direction is the
study of human factors in the interface design and the
addition of further functionalities for communication,
alerts, and health analysis. A further step is a field
study to assess the usability, acceptance rate, and
benefits of our system.
ACKNOWLEDGEMENTS
The authors would like to acknowledge the
contribution of the Natural Sciences and Engineering
Research Council (NSERC) of Canada and the
Interactive Intelligent Systems and Computing
research group through which this project is
supported.
REFERENCES
Amanatiadis, E., & Faniadis, A. (2020). Deep Learning
Inference at the Edge for Mobile and Aerial Robotics.
IEEE International Symposium on Safety, Security, and
Rescue Robotics (SSRR) (pp. 334-340). IEEE.
Anjum, T., Lawrence, S., & Shabani, A. (2021). Efficient
Data Augmentation within Deep Learning Framework
to Improve Cross-Dataset Facial Emotion Recognition.
The 25th International Conference on Image Processing,
Computer Vision, & Pattern Recognition. Las Vegas:
Springer Nature.
Chunlei, C., Peng, Z., Huixiang, Z., Jiangyan, D., Yugen,
Y., Huihui, Z., & Zhang, Y. (2020). Deep Learning on
Computational-Resource-Limited Platforms: A Survey.
Mobile Information Systems, 1-19. Hindawi.
Forest, A., & Shabani, A. (2017). A Novel Approach in
Smart Ventilation Using Wirelessly Controllable
Dampers. Canadian Conference on Electrical and
Computer Engineering. Windsor, Canada: IEEE.
Feng, Y., Barakova, E., Yu, S., Hu, J., & Rauterberg, G.
(2020). Effects of the Level of Interactivity of a Social
Robot and the Response of the Augmented Reality
Display in Contextual Interactions of People with
Dementia. Sensors, 20(13), 3771.
Ghaeminia, M.H., Shabani, A.H., & Shokouhi, S.B. (2010).
Adaptive motion model for human tracking using
Particle Filter. International Conference on Pattern
Recognition, Istanbul, Turkey.
Howard, A., Menglong, Z., Chen, B., Kalenichenko, D.,
Wang, W., Weyand, T., & Adam, H. (2017). MobileNets:
Efficient convolutional neural networks for mobile
vision applications. arXiv preprint arXiv:1704.04861.
Ionut, A., Tudor, C., Moldovan, D., Antal, M., Pop, C. D.,
Salomie, I., Chifu, V. R. (2020). Smart Environments
and Social Robots for Age-Friendly Integrated Care
Services. International Journal of Environmental
Research and Public Health.
Keven, T. K., Domenico, P., Federica, S., & Philip, W.
(2018). Key challenges for developing a Socially
Assistive Robotic (SAR) solution for the health sector.
23rd International Workshop on Computer Aided
Modeling and Design of Communication Links and
Networks (CAMAD). Rome, Italy: IEEE.
Lawrence, S., Anjum, T., & Shabani, A. (2021). Improved
Deep Convolutional Neural Network with Age
Augmentation for Facial Emotion Recognition in