
environment, such as a Wireless Local Area Network
(WLAN), data does not need to travel long distances
or pass through multiple servers, as it does in public
cloud infrastructures like AWS. This significantly re-
duces network latency between physical and digital
assets, resulting in faster response times and more accurate real-time control, both of which are critical for AGV operations and for the immersion of a VR environment driven by a synchronized DT.
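The latency claim can be checked empirically with a simple UDP echo probe. The sketch below is a minimal illustration in plain Python, not part of the described system; the loopback address stands in for a peer on the WLAN, and in practice the probe would target the DT host's address:

```python
import socket
import threading
import time

def run_echo_server(sock):
    """Echo each datagram back to its sender (stand-in for the DT host)."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            break
        sock.sendto(data, addr)

def measure_rtt(target, samples=20):
    """Send timestamped probes and return the mean round-trip time in ms."""
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.settimeout(1.0)
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        client.sendto(b"probe", target)
        client.recvfrom(1024)
        rtts.append((time.perf_counter() - start) * 1000.0)
    client.sendto(b"stop", target)  # shut down the echo loop
    client.close()
    return sum(rtts) / len(rtts)

if __name__ == "__main__":
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))  # loopback stands in for the WLAN peer
    threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()
    print(f"mean RTT: {measure_rtt(server.getsockname()):.3f} ms")
```

On a local network such a probe typically reports single-digit milliseconds, whereas a round trip to a public cloud region adds tens of milliseconds or more.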
The integration of VR with a Dockerized private
cloud plays a significant role in enhancing user interaction with the digital asset. VR immerses the user by simulating their presence in the digital environment, making it an ideal tool for interfacing with DTs. The choice of Unity over Gazebo as the host of the digital asset was driven by
several factors. Primarily, Gazebo plugins for deploy-
ing the system to VR headsets have not been updated
to keep pace with the rapid development the industry
has seen in recent years. Additionally, Unity provides
greater flexibility for creating custom environments
and integrating advanced graphic features. Most no-
tably, Unity excels in supporting innovative interac-
tive input interfaces, such as hand gestures. In gen-
eral, the ROS-Unity3D architecture for the simulation
of mobile ground robots proved to be a viable alternative to ROS-Gazebo and the best option for integrating VR into the system (Platt and Ricks, 2022). For these reasons, the common practice among
robotics developers and researchers is to divide the
system into two subsystems: Ubuntu for interaction
with the physical asset and Windows for VR development in Unity.
The system’s latest update involves creating a
Unity scene to host the digital simulation.
Map: The environment was first constructed on an Ubuntu machine using the slam_gmapping node.
The mesh files were then transferred to the Unity
project on a Windows machine, where surfaces and
colliders were added to the walls to enhance realism.
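Gmapping publishes the map as an occupancy grid, so the step of adding colliders to walls amounts to converting occupied grid cells into collider positions. The function below is a hypothetical sketch of that conversion, not the project's actual tooling; a 2-D list of rows stands in for the ROS OccupancyGrid data:

```python
def occupied_cells_to_colliders(grid, resolution, origin=(0.0, 0.0), threshold=50):
    """Convert occupied cells of an occupancy grid (rows of values 0-100,
    -1 = unknown) into (x, y) centre positions, in metres, where a box
    collider of side `resolution` would be placed."""
    colliders = []
    for row_idx, row in enumerate(grid):
        for col_idx, value in enumerate(row):
            if value >= threshold:  # treat the cell as a wall
                x = origin[0] + (col_idx + 0.5) * resolution
                y = origin[1] + (row_idx + 0.5) * resolution
                colliders.append((x, y))
    return colliders

grid = [
    [100, 100, 100],
    [  0,  -1,   0],
    [100,   0, 100],
]
print(occupied_cells_to_colliders(grid, resolution=0.05))  # 5 wall cells
```

In the actual pipeline the mesh files were edited inside the Unity project, but the mapping from grid cells to wall geometry follows the same logic.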
URDF Importer: The Unified Robotics Descrip-
tion Format (URDF) Importer is a Unity robotics
package that enables the import of robot assets defined in the URDF format into Unity scenes. The imported asset retains its geometry, visual meshes, and kinematic and dynamic attributes. The Importer parses the URDF file and represents the robot with PhysX 4.0 articulation bodies. Using the URDF Importer, the TurtleBot3 OpenMANIPULATOR-X was imported, and
additional objects were added to replace some of the
default actuation scripts, as advised by the package
development team. Customized scripts were imple-
mented to better represent the AGV and its robotic
arm within Unity.
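The URDF files consumed by the Importer are plain XML, so the joint definitions it reads can be illustrated with a few lines of standard-library Python. The snippet below is a minimal sketch of the format only, not the Importer's implementation; the sample URDF fragment is hypothetical:

```python
import xml.etree.ElementTree as ET

URDF = """
<robot name="open_manipulator_x">
  <joint name="joint1" type="revolute">
    <parent link="link1"/>
    <child link="link2"/>
    <axis xyz="0 0 1"/>
    <limit lower="-3.14" upper="3.14" effort="1.0" velocity="4.8"/>
  </joint>
</robot>
"""

def parse_joints(urdf_text):
    """Return {joint name: (type, lower limit, upper limit)} from a URDF string."""
    joints = {}
    for joint in ET.fromstring(urdf_text).iter("joint"):
        limit = joint.find("limit")
        lower = float(limit.get("lower")) if limit is not None else None
        upper = float(limit.get("upper")) if limit is not None else None
        joints[joint.get("name")] = (joint.get("type"), lower, upper)
    return joints

print(parse_joints(URDF))  # {'joint1': ('revolute', -3.14, 3.14)}
```

The joint types and limits recovered this way are what the Importer maps onto articulation body drives in the Unity scene.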
3.1 Robotic Arm Integration
The TurtleBot3 was augmented with a five-degree-
of-freedom robotic arm. The arm is powered by the
TurtleBot3 OpenCR board while its servo motors are
connected to a servo motor controller board for eas-
ier setup. The latter board is connected to the RPi and
communicates with it over I2C, a simple two-wire serial bus commonly used to connect chips in embedded systems, to receive commands and control the arm.
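The specific servo controller board is not detailed here; a common choice in such setups is a PCA9685-style 16-channel I2C PWM driver, which is assumed in the sketch below. It shows how a target joint angle maps to the driver's 12-bit PWM count and the register write sent over I2C; the register layout follows the PCA9685 datasheet, while the pulse-width limits are assumptions:

```python
def angle_to_ticks(angle_deg, freq_hz=50, min_us=500, max_us=2500):
    """Map a servo angle (0-180 deg) to a 12-bit PWM OFF-count, assuming a
    PCA9685-style driver with 4096 ticks per PWM period."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle out of range")
    pulse_us = min_us + (max_us - min_us) * angle_deg / 180.0
    period_us = 1_000_000 / freq_hz  # 20000 us at 50 Hz
    return round(pulse_us / period_us * 4096)

def i2c_payload(channel, ticks):
    """Build the register address plus four data bytes for one channel:
    ON time fixed at tick 0, OFF time at `ticks`. Per the PCA9685
    datasheet, LED0_ON_L = 0x06 with 4 registers per channel."""
    base = 0x06 + 4 * channel
    return [base, 0x00, 0x00, ticks & 0xFF, (ticks >> 8) & 0xFF]

# 90 degrees -> 1500 us pulse -> 1500/20000 * 4096 = 307 ticks
print(angle_to_ticks(90))   # 307
print(i2c_payload(0, 307))  # [6, 0, 0, 51, 1]
```

On the RPi, the payload would be written with a standard I2C library; the arithmetic above is the part that turns a commanded joint angle into bytes on the bus.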
To digitally represent the robotic arm, we uti-
lized the OpenMANIPULATOR-X model, which is
mounted on the TurtleBot3 and imported into Unity.
The arm follows the same three-layer architecture as
the TurtleBot3, with two separate ROS Masters: one
for the Physical Twin and the other for the Digital
Twin. The hardware layer of the Digital Twin is re-
sponsible for collecting the sensory data from the arm,
specifically the joint state data. This data is organized and sent to the middleware layer, which propagates it to the cloud layer via WebSockets. Once the
data reaches the cloud, it is processed and sent back
through the same layers in reverse, eventually reach-
ing the hardware layer of the physical asset to actuate
the robotic arm.
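Since rosbridge-style middleware transports ROS messages as JSON over WebSockets, the joint state data crossing the middleware-to-cloud boundary can be sketched as follows. The topic name /dt/joint_states and the reduced JointState fields are assumptions for illustration:

```python
import json

def joint_state_frame(names, positions, topic="/dt/joint_states"):
    """Wrap joint data in a rosbridge-protocol 'publish' frame (JSON)."""
    return json.dumps({
        "op": "publish",
        "topic": topic,
        "msg": {"name": list(names), "position": list(positions)},
    })

def parse_frame(frame):
    """Recover (names, positions) from an incoming rosbridge frame."""
    msg = json.loads(frame)["msg"]
    return msg["name"], msg["position"]

frame = joint_state_frame(["joint1", "joint2"], [0.0, 1.57])
names, positions = parse_frame(frame)
print(names, positions)  # ['joint1', 'joint2'] [0.0, 1.57]
```

The same framing carries the processed commands on the return path, where the hardware layer of the physical asset turns the received positions into servo commands.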
The robotic arm was assembled piece by piece following the manufacturer's instructions, with each linkage between components carefully checked, since every connection must be correct for the arm to function properly. Minor modifications were made to suit the study's demands: although the arm has six servo motors, only five were used in the experiment, matching the Digital Twin's degrees of freedom, and of the accessories initially included only a few were suitable for the task. The gripper was replaced because of doubts about its precision in object manipulation. To integrate the arm into the experimental setup, it was mounted on an additional surface on top of the TurtleBot3 platform, a position chosen to avoid interference with the LIDAR sensor while preserving the TurtleBot3's balance.
The robotic arm consists of five servos, each offering a range of 180 degrees. Each servo motor was placed to enable precise control over the arm's movements, performing the following functions:
• Servo Motor 1 facilitates the rotation of the arm
from side to side, serving as the base of its move-
ment.
GRAPP 2025 - 20th International Conference on Computer Graphics Theory and Applications