The configuration manager enables users to express their goals (e.g., visualizing the area around the user's location within a specified radius). Essentially, a user configures spatial boundaries (e.g., a building or a neighborhood) and the level of detail desired for representing the virtual world. The monitoring component is responsible for observing i) internal hardware resources and their utilization and ii) the quality of the communication link (i.e., its status, latency, and bandwidth). The workload processing component is responsible for executing procedural geometry workloads; such workloads can be packaged into software containers within the overall service-based architecture (Dustdar and Murturi, 2021). As illustrated in Figure 3, the process starts (1) when a user expresses her goal via the edge application. The request, together with hardware information, is then forwarded (2-3) to the adaptation manager, which interprets the goal and decides where to execute the workload.
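To make the interplay between these components concrete, the following minimal Python sketch shows what a user goal, a monitoring snapshot, and the forwarded request could look like; all class and field names are illustrative assumptions rather than part of the described prototype.

from dataclasses import dataclass

# Illustrative data carried through steps (1)-(3); names are assumptions.
@dataclass
class UserGoal:
    latitude: float          # user's location
    longitude: float
    radius_m: float          # spatial boundary, e.g., a building or a neighborhood
    level_of_detail: int     # amount of detail for the virtual world

@dataclass
class DeviceStatus:
    cpu_utilization: float   # observed by the monitoring component
    gpu_available: bool
    free_memory_mb: int
    link_latency_ms: float   # quality of the communication link
    link_bandwidth_mbps: float

@dataclass
class GenerationRequest:
    goal: UserGoal           # expressed via the edge application (1)
    device: DeviceStatus     # forwarded together with the goal (2-3)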
The adaptation management component is responsible for identifying the devices needed to achieve a user goal with the lowest possible latency. If the goal is achievable on the user's device, it forwards the required data to that host. If the goal is not achievable locally, the adaptation component attempts to generate a deployment plan that maps the workload to other available devices. As illustrated in Figure 3, procedural generation may occur on cloud, fog, and edge devices. To generate valid deployment plans, the adaptation component must consider several factors, such as device hardware requirements, network metrics, and the time required to transfer (un)processed geometry. To generate optimal plans, it requires fine-grained information about the infrastructure (4).
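A possible shape of such a decision is sketched below in Python; the latency model and all names are assumptions made for illustration, not the adaptation algorithm itself.

# Illustrative heuristic: estimate end-to-end latency per candidate device and pick
# the lowest; the real adaptation component may use a more elaborate cost model.
def estimate_latency_s(device, workload):
    transfer_s = workload["geometry_mb"] * 8.0 / device["bandwidth_mbps"]
    compute_s = workload["work_units"] / device["throughput_units_per_s"]
    return device["rtt_s"] + transfer_s + compute_s

def plan_deployment(user_device, candidates, workload):
    if user_device["meets_requirements"]:
        return {"target": "local"}                      # execute on the user's device
    feasible = [d for d in candidates if d["meets_requirements"]]
    if not feasible:
        return {"target": "cloud"}                      # fall back to the cloud
    best = min(feasible, key=lambda d: estimate_latency_s(d, workload))
    return {"target": best["id"],
            "estimated_latency_s": estimate_latency_s(best, workload)}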
The cloud plays a supportive role, which includes procedural geometry configuration management, data storage (e.g., of 3D models), resource management, geometry workload generation, and overall orchestration. As illustrated in Figure 3, the user's request can also be forwarded (5) directly to the cloud if no other solution is feasible. The resource management component comprises a set of functionalities ranging from resource discovery (i.e., discovering available edge devices) to context monitoring (i.e., monitoring the hardware infrastructure and updating its status when changes occur). Orchestration entails deciding where the software components must be placed, aiming for reliable and low-latency service to end users.
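As a rough illustration of resource discovery and context monitoring, the following Python sketch keeps a registry of known devices and updates it when a status probe reports a change; the interface is a hypothetical simplification.

class ResourceRegistry:
    """Hypothetical registry used by the resource management component."""

    def __init__(self):
        self.devices = {}                    # device_id -> last known status

    def register(self, device_id, status):
        self.devices[device_id] = status     # resource discovery

    def refresh(self, probes):
        # Context monitoring: probes maps device_id -> callable returning a status dict.
        for device_id, get_status in probes.items():
            status = get_status()
            if self.devices.get(device_id) != status:
                self.devices[device_id] = status   # update only when changes occur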
Recent developments in IoT-based systems have shown that systems can be engineered, deployed, and executed on Edge-Cloud infrastructures (Alkhabbas et al., 2020). At the same time, software components can self-adapt to dynamic changes in their deployment topologies when the quality of their services degrades (Brogi et al., 2020). Finally, as shown in Figure 3, software components can be placed on different devices, yielding different deployment configurations. More specifically, software components that face a high volume of requests from a particular region can be placed in proximity to the end users. For instance, if the procedural geometry generation for a particular city area occurs mostly on user devices, then the orchestration mechanism must instantiate the data storage component, with the associated data (i.e., 3D models), on the fog devices nearest to those users. As a result, data can be forwarded to end users faster from the edge layer than from the cloud via a WAN connection.
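The placement rule from this example could be approximated as follows; the threshold, the data structures, and the latency figures are purely illustrative assumptions.

def place_data_storage(area, fog_nodes, request_stats, local_share_threshold=0.5):
    # If most requests for an area are generated on user devices, replicate the
    # 3D-model storage on the fog node with the lowest latency to that area;
    # otherwise keep the models in central cloud storage.
    stats = request_stats[area]
    local_share = stats["user_devices"] / stats["total"]
    if local_share < local_share_threshold:
        return "cloud"
    nearest = min(fog_nodes, key=lambda node: node["latency_ms"][area])
    return nearest["id"]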
5 AN EMERGING RESEARCH AGENDA
Satisfying the dynamic and stringent requirements of contemporary applications such as those in AR/VR is challenging for centralized cloud-based systems. Processing 3D models and transferring vast amounts of data to user-facing devices over the internet incur latency and degrade the user experience. We discussed the aspects emerging from latency and computation requirements and how edge architectures can address these requirements and support procedural geometry workloads. On this basis, we sketched an architecture capable of provisioning such workloads in edge computing scenarios.
As future work, we aim to provide a complete technical framework for processing geometry workloads on edge-based architectures; this includes both technical and architectural aspects. Encapsulating procedural generation so that it can execute on heterogeneous hardware platforms is challenging, as such workloads must take advantage of specialized hardware (such as GPUs) when available, yielding different configurations. Accordingly, our vision entails containerizing these workloads so that a service-based architecture emerges across the device-to-cloud continuum. The performance of different geometry workloads executed on state-of-the-art devices, both resource-constrained and powerful, needs to be carefully considered. Beyond that, assessing deployment tradeoffs in terms of quality, performance, and cost is highly desirable. Regarding deployment, the edge topology may not be static, and components may need to be scaled or migrated to comply with constraints such as energy, latency, or device movement, introducing dynamicity. Finally, we identify three main