terns and consequently situations of interest in the en-
vironment in which the user is involved. The model
proposed by Machado (2017) aims to manage an en-
vironment in which situations occur. Such situations
generate events and the detection of patterns related to
these events characterizes situations of interest, which
may or may not require intervention. When an unwanted
situation occurs, the model performs actions to control
the environment and to avoid damage related to the
monitored context.
It is worth mentioning that many organizations have
difficulty monitoring and controlling their processes
at run time: although the execution of their processes
produces information (logs), they do not know how best
to use this information, which affects the final result
of the product or service provided. This work presents
a model to monitor and control the organizational
environment in order to avoid situations that
negatively impact business processes. The proposed
model extends the model of proactive actions proposed
by Machado (2017) by adding characteristics of the
environment, so that the environment can be controlled
and the occurrence of undesired situations minimized.
The model takes into account the execution times and
performance of processes in the business environment
and is able to act proactively or reactively in the
face of environmental situations, if necessary.
The article is structured as follows: In Section 2,
we present the main concepts found in the literature.
The model developed in this work is presented in Sec-
tion 3. Section 4 presents a case study using the model,
followed by Section 5, where results and discussions
are presented. Finally, in Section 6, we draw some
conclusions and indicate points for future work.
2 BACKGROUND AND RELATED
WORK
This section describes concepts that serve as a the-
oretical basis for the development of the work: Process
Mining, Event Logs, Control Charts, and the Model for
Proactive Identification of Situations of Interest,
as well as Related Work. According to Van Der Aalst
(2011), process mining aims to extract knowledge
from data generated by the execution of processes in
information systems used by organizations. It seeks
to discover, monitor and improve existing real pro-
cesses. Process mining emerged in the 1990s, with
the first works focusing on mining process models
in software engineering event logs (Cook and Wolf,
1995).
Process mining is a research area positioned between
the areas of artificial intelligence and data mining,
on one side, and process modeling and analysis, on the
other; the growing interest in the area can be justified
by the fact that more and more events are recorded (Van
Der Aalst, 2016; Burattin, 2013). The increasing
volume of digital information related to processes in
organizations allows the registration and analysis of
their events. Any step or operation of a process or sys-
tem can be seen as an event (Van Der Aalst, 2012a).
In other words, process mining is important and
efficient because it converts historical information
(logs) related to a process into knowledge, enabling
organizations' specialists to view, monitor and control
what is really happening in the execution of processes.
An event log is the record of events that occurred
during the functioning of an organization, and this
record is stored mainly by information systems (Van
Der Aalst, 2011) (Glavan, 2011).
An event is considered a tuple containing the fol-
lowing fields (Van Der Aalst, 2011): (i) ID: identifi-
cation; (ii) Timestamp: activity start date and time;
(iii) Activity: activity description; (iv) Resource: the
party responsible for the activity; (v) Cost: activity
cost; (vi) Extra data: additional information about an
activity.
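The event tuple described above can be sketched as a simple record, for instance in Python (the field names and sample values are illustrative, not taken from the paper):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Event:
    """One event-log entry with the fields listed by Van Der Aalst (2011)."""
    case_id: str          # (i) ID: identification of the process instance
    timestamp: datetime   # (ii) Timestamp: activity start date and time
    activity: str         # (iii) Activity: activity description
    resource: str         # (iv) Resource: who performed the activity
    cost: float = 0.0     # (v) Cost: activity cost
    extra: dict = field(default_factory=dict)  # (vi) Extra data

# An event log is then simply an ordered collection of such events:
log = [
    Event("case-1", datetime(2020, 1, 6, 9, 0), "Register request", "Ana", 50.0),
    Event("case-1", datetime(2020, 1, 6, 9, 30), "Check ticket", "Bruno", 100.0),
]
print(log[0].activity)  # -> Register request
```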
Log data may be stored in a distributed and in-
complete way, may be inconsistent with reality, and
may contain outliers (noise). In addition, there are
activities performed in the company that depend on
third parties and are therefore not registered in the
system (Van Der Aalst, 2012b).
Control charts can be used to measure the parame-
ters provided by the logs. With them, it is possible
to define upper and lower limits for a metric and thus
identify the occurrence of anomalies in the stored
records, in addition to monitoring and controlling the
actions involving these records. According to Oliveira
(2013), control charts are tools used to monitor the
performance of a process based on characteristics
called control limits. These limits are known as the
(i) upper line (upper control limit, UCL), (ii) lower
line (lower control limit, LCL), and (iii) central line
(center line, CL).
According to Oliveira (2013), when all the sam-
ple points are within the control limits, the process is
considered to be "under control". However, if one or
more points lie outside the imposed control limits,
there is evidence that the process is "out of control"
and that an investigation into the occurrences, together
with corrective actions, is needed to detect and elim-
inate special causes in the process. Therefore, after
analyzing the samples, it is possible to determine
whether a situation in the environment is of interest,
that is, whether it characterizes an uncontrolled envi-
ronment (Machado, 2017).
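A minimal sketch of this control-chart check, assuming the common choice of limits at three standard deviations around the mean of a known in-control baseline (the paper does not fix a specific limit formula, and the duration values below are hypothetical):

```python
import statistics

def control_limits(baseline, k=3.0):
    """Limits from an in-control baseline: center line +/- k standard deviations."""
    cl = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return cl - k * sigma, cl, cl + k * sigma

def out_of_control(points, lcl, ucl):
    """Indices of points outside [lcl, ucl] -- candidate situations of interest."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

# Hypothetical activity durations (minutes) extracted from an event log.
baseline = [10, 11, 9, 10, 12, 10, 11, 10, 9, 11]  # known in-control period
lcl, cl, ucl = control_limits(baseline)

new_points = [10, 11, 30, 9]                       # durations being monitored
print(out_of_control(new_points, lcl, ucl))        # -> [2] (the 30-minute run)
```

Points flagged by `out_of_control` correspond to samples outside the imposed limits, i.e. evidence of an "out of control" process in the sense described above.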
ICEIS 2020 - 22nd International Conference on Enterprise Information Systems