quire an ordering mechanism to ensure the correct submission of multicast messages
(see, e.g., [9]). A choreography engine multicasts its entire allocation history (multiple
task allocation records) to all choreography engines that control constrained tasks. In
this scheme, fewer messages are sent (one history and possibly a confirmation mes-
sage) but the payload increases (task allocation history of an entire choreography en-
gine). Also, the impact of omission failures increases because the delivery of a history may be more time-critical. Moreover, we require an ordering mechanism to ensure the correct submission of multicast messages.
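As a rough illustration only, an engine-based history push with confirmation might be sketched as follows; the record layout and the `multicast` helper are assumptions for this sketch, not part of the original scheme description.

```python
# Hypothetical sketch of an engine-based history push with confirmation.
def multicast(sender, receivers, payload):
    """Placeholder transport: deliver the payload to every receiver and collect confirmations."""
    return {r: "confirmed" for r in receivers}  # assumes every receiver answers

def push_engine_history(engine_id, history, constrained_task_engines):
    # One multicast carries this engine's *entire* allocation history
    # (multiple task allocation records) to every engine that controls
    # a constrained task.
    confirmations = multicast(engine_id, constrained_task_engines, list(history))
    missing = set(constrained_task_engines) - set(confirmations)
    if missing:
        raise TimeoutError(f"omission failure: no confirmation from {missing}")

history = [{"task": "t1", "participant": "clerk-A"},
           {"task": "t2", "participant": "clerk-B"}]
push_engine_history("engine-1", history, ["engine-2", "engine-3"])
```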
A Cumulative History Push scheme requires the smallest number of messages to be sent. The history message contains all previous task allocation records of the respective process. Because a single history is passed between the choreography engines, its delivery is still more time-critical. However, as multicasting is not necessary, we do not need to implement an ordering mechanism.
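A cumulative history push could be sketched along the same lines; here the single history object is simply handed from one engine to the next (all names below are illustrative).

```python
# Hypothetical sketch of a cumulative history push: one history travels with
# the process instance, so each engine sends a single point-to-point message
# and no multicast ordering mechanism is required.
def allocate_and_forward(engine_id, tasks, cumulative_history, forward):
    for task in tasks:
        # append this engine's task allocation records to the shared history
        cumulative_history.append({"engine": engine_id, "task": task})
    forward(cumulative_history)  # single message to the next engine only

allocate_and_forward("engine-1", ["t1", "t2"], [],
                     forward=lambda h: print(f"forwarding {len(h)} records"))
```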
In the Task-Based History Pull scheme, the respective choreography engine has to request the allocation history of the constraining task(s) before allocating a constrained task. Similar to task-based history push, the message size is small (a request and a corresponding response, each consisting of a single task allocation record). As the allocation of a constrained task depends heavily on the communication between choreography engines, an omission failure may have significant effects. However, as multicasting is not necessary, there is no need to implement an ordering mechanism.
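A task-based history pull might be sketched as follows, with the hypothetical `request_record` standing in for whatever remote invocation the engines actually use.

```python
# Hypothetical sketch of a task-based history pull: before a constrained task
# is allocated, the engine requests the single allocation record of each
# constraining task from the engine controlling it.
def request_record(remote_engine, task_id):
    """Placeholder remote call; an omission failure here blocks the allocation."""
    return {"task": task_id, "participant": "clerk-A"}

def allocate_constrained_task(task_id, constraining_tasks, choose_participant):
    # Allocation can only proceed once every constraining record has arrived.
    records = [request_record(engine, t) for engine, t in constraining_tasks]
    return {"task": task_id, "participant": choose_participant(records)}

allocation = allocate_constrained_task(
    "t5", [("engine-2", "t3")],
    choose_participant=lambda records: "clerk-B")  # e.g. separation of duties
```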
In an Engine-Based History Pull scheme, each choreography engine requests engine-based allocation histories when allocating its first constrained task. Similar to engine-based history push, the number of messages decreases but their size increases compared to task-based history exchange. Omission failures may delay the allocation of the first constrained task. However, as multicasting is not necessary, there is no need to implement an ordering mechanism.
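For the engine-based pull variant, a sketch only needs to add a cache so that the remote histories are fetched once, at the first constrained task; again, the helper names are purely illustrative.

```python
# Hypothetical sketch of an engine-based history pull.
_history_cache = {}

def request_engine_history(remote_engine):
    """Placeholder remote call returning that engine's full allocation history."""
    return [{"task": "t3", "participant": "clerk-A"}]

def histories_for(remote_engines):
    # Only the allocation of the first constrained task pays the
    # communication cost; later allocations reuse the cached histories.
    for engine in remote_engines:
        if engine not in _history_cache:
            _history_cache[engine] = request_engine_history(engine)
    return _history_cache

histories_for(["engine-2", "engine-3"])
```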
In an Orchestration Engine architecture, the entire business process history is maintained locally, but the engine has to communicate with the different remote services. As there is no need to exchange a history, the messages are allocation requests of small size (a single request and a respective confirmation for each task to be allocated). An orchestration engine architecture is the one most impacted by omission failures: if the orchestration engine crashes, the execution of the entire business process freezes, and a crash of any domain hosting a task to be allocated next also stops at least part of the business process. However, as multicasting is not necessary, there is no need to implement an ordering mechanism.
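An orchestration engine can be pictured as a single loop that keeps the history locally and exchanges one request/confirmation pair per task; the sketch below uses hypothetical names and makes no claim about a concrete engine implementation.

```python
# Hypothetical sketch of an orchestration engine: the history stays local and
# each task allocation is one small request/confirmation exchange with the
# domain hosting the task. A crash of the engine halts the whole process.
def send_allocation_request(domain, task_id):
    """Placeholder remote call; returns the domain's confirmation."""
    return {"task": task_id, "confirmed": True}

def run_process(task_sequence):
    local_history = []                      # the entire history stays at the engine
    for domain, task_id in task_sequence:
        reply = send_allocation_request(domain, task_id)
        if not reply.get("confirmed"):      # omission failure or crashed domain
            raise RuntimeError(f"process blocked at task {task_id}")
        local_history.append({"task": task_id, "domain": domain})
    return local_history

run_process([("domain-A", "t1"), ("domain-B", "t2")])
```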
In addition, the following three interrelated determinants have to be considered in order to choose a proper process engine architecture: the number of constrained tasks per business process (degree of constraint; DOC), the number of participants in the business process (degree of distribution; DOD), and the number of business process control transitions between different participants in a business process instance (degree of networking; DON). According to these characteristics and the corresponding performance categories, we can choose the approach that best fits a particular SOA. For example, a business process with a high DOC, a high DOD, and a high DON may best be handled with a choreography engine architecture using an engine-based history push approach with confirmation. On the other hand, if our focus is on minimizing message size and the cost of implementing an ordering mechanism, an orchestration engine architecture may be a better choice.
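The selection itself can be pictured as a simple heuristic over these three degrees. The sketch below encodes only the example just given plus an illustrative fallback; the thresholds and the mapping are assumptions for illustration, not a rule prescribed here.

```python
# Hypothetical decision sketch over DOC, DOD and DON.
def choose_architecture(doc_high, dod_high, don_high):
    """Each flag states whether the respective degree is considered high."""
    if doc_high and dod_high and don_high:
        # the example case discussed above
        return "choreography engines + engine-based history push with confirmation"
    # illustrative fallback: favour small messages and no ordering mechanism
    return "orchestration engine"

print(choose_architecture(doc_high=True, dod_high=True, don_high=True))
```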