devices and let them make some of the business decisions and actuate on the environment without having to exchange information with the execution engine. This approach offers two benefits: it reduces the number of exchanged messages, extending battery lifespan, and it offloads the execution engine by decentralising part of the process execution to the edges of the network (the IoT devices). This extra execution does consume power on the IoT device, but the energy cost of the migrated tasks is much smaller than that of the communication they replace.
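As a rough illustration of this trade-off, the sketch below compares the energy of a short local computation with that of a small message exchange. The figures are assumed order-of-magnitude values for a low-power node, not measurements from this work:

```python
# Back-of-envelope energy comparison. The constants below are
# illustrative assumptions (typical orders of magnitude for low-power
# MCUs and radios), not figures reported in this paper.
RADIO_NJ_PER_BIT = 200.0   # assumed energy to transmit one bit
CPU_NJ_PER_INSTR = 1.0     # assumed energy per executed instruction

def tx_energy_nj(message_bytes: int) -> float:
    """Energy to transmit one message of the given size."""
    return message_bytes * 8 * RADIO_NJ_PER_BIT

def compute_energy_nj(instructions: int) -> float:
    """Energy to execute the given number of instructions locally."""
    return instructions * CPU_NJ_PER_INSTR

# Taking a small decision locally (say, 5 000 instructions) versus
# exchanging a 64-byte request/response pair with the engine:
local = compute_energy_nj(5_000)   # 5 000 nJ
remote = 2 * tx_energy_nj(64)      # 204 800 nJ
assert local < remote
```

Under these assumptions the message exchange costs roughly forty times more energy than the migrated computation, which is the rationale for pushing small decisions to the device.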
Decomposition breaks a system into progressively smaller subsystems that implement fragments of the domain problem. It is also a means to model large systems and to facilitate the reuse of partial models, because it reduces coupling, helps control specification complexity, and allows multiple levels of detail to coexist. Consequently, models become easier to understand, which, in turn, facilitates their validation, redesign, and optimisation (Caetano et al., 2010).
Despite the benefits of decentralisation, business processes are still defined following a centralised approach. Our solution automatically decomposes business processes, transforming the original process into one that takes advantage of the computational resources that IoT devices make available. We begin by generating, from a BPMN model, a graph that captures the control and data flow dependencies of its tasks. We then identify paths that contain only BPMN elements capable of being executed on IoT devices and that can therefore be transferred. Next, we check which of these paths match the patterns we have identified, and apply the corresponding transformation procedure. Finally, we redesign the BPMN model according to the resulting solution.
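The steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it works on a linearised task list instead of a full BPMN graph, uses a hypothetical `iot_ok` annotation in place of the element-capability check, and applies a single simplified pattern (a fragment is transferable only if it reads no data written outside it):

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Task:
    name: str
    iot_ok: bool                                   # annotated as executable on an IoT device
    reads: Set[str] = field(default_factory=set)   # data objects the task reads
    writes: Set[str] = field(default_factory=set)  # data objects the task writes

def candidate_fragments(process: List[Task]) -> List[List[Task]]:
    """Group maximal runs of consecutive IoT-executable tasks, then keep
    only runs whose reads can be satisfied without messages from the
    central engine (a deliberately simplified pattern check)."""
    fragments, run = [], []
    for task in process:
        if task.iot_ok:
            run.append(task)
        else:
            if run:
                fragments.append(run)
            run = []
    if run:
        fragments.append(run)

    def self_contained(frag: List[Task]) -> bool:
        internal = set().union(*(t.writes for t in frag))
        external = set().union(*(t.writes for t in process if t not in frag))
        needed = set().union(*(t.reads for t in frag))
        return not (needed & (external - internal))

    return [f for f in fragments if self_contained(f)]
```

For example, in a process `read sensor → filter → bill customer → notify`, where only `bill customer` must stay on the engine, the run `read sensor, filter` is transferable, while `notify` is not if it reads data produced by the central task.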
The decomposition procedure is based on partitioning techniques, which group activities into sub-processes and assign them to separate participants. These techniques were previously applied to generic workflow models with simplified approaches that omit data dependencies (Sadiq et al., 2006) or require designers to define the possible execution locations of activities (Fdhila et al., 2009; Xue et al., 2018). We use BPMN diagrams to determine data and control dependencies, as well as the execution location of activities. This proposal is a step forward from our previous work (Domingos et al., 2014), as it reduces the number of steps of the algorithm and improves the way tasks to be transferred to IoT devices are identified. Beyond reducing the centralised execution of business processes, this solution also reduces the number of communications performed with IoT devices.
This paper is organised as follows: the next section presents the related work; Section 3 presents our approach to process decomposition, illustrated via a case study; Section 4 shows the developed prototype; and the last section concludes the paper and discusses future work.
2 RELATED WORK
The Mentor project presents one of the first proposals
on process decomposition (Wodtke et al., 1996). To
model processes, the authors use state diagrams and
partition processes taking into account control and
data flows.
The growing use of the Web Services Business Process Execution Language (WS-BPEL) (OASIS, 2007) justified the development of proposals for decomposing such processes. Nanda et al. (2004) create a new subprocess for each service of the original process; these subprocesses communicate directly, reducing the synchronisation and message exchange that the central process execution engine has to perform. However, these authors do not consider the possibility of grouping several services in the same subprocess.
Sadiq et al. (2006) and Fdhila et al. (2009) decom-
pose generic process models by grouping various ac-
tivities into subprocesses and by distributing the exe-
cution of these subprocesses among different execu-
tion engines. The former only considers control flow
dependencies, while the latter also considers data flow
dependencies. A subsequent work by Fdhila et al. (2014) optimises subprocess composition taking into account several quality-of-service parameters, such as cost, time, reliability, and availability. Yu et al. (2016) use genetic programming to create partitions (subprocesses) of data-intensive WS-BPEL processes.
To exploit the advantages the cloud offers for executing fragments of business processes, Duipmans et al. (2012) divide business process tasks into two categories: those that run locally and those that can run in the cloud. With this division, the authors intend to perform the most computationally intensive tasks, provided their data are not confidential, in the cloud. The identification of these tasks is performed manually. Povoa et al. (2014) propose a semi-automatic mechanism to determine the location of activities and their data based on confidentiality policies, monetary costs, and performance metrics. Hoenisch et al. (2016) optimise the distribution of activities taking into account additional parameters, such as the cost associated with delays in the execution of activities and the unused, but paid-for, time of cloud resources.
In (Domingos et al., 2014) we present a prelimi-
Automatic Decomposition of IoT Aware Business Processes with Data and Control Flow Distribution