of the simulation. Here we used the datalogger in the plant again, but in the context of a real-time analysis. The network traffic (Profinet frames) is analyzed, and after each change of a signal our anomaly detection tool receives a message with the signal, its value, and the timestamp.
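For illustration, such a signal-change message can be pictured as a small record handed from the traffic analysis to the detector. The following is only a sketch and not taken from our tool; the names SignalChange, on_signal_change, and detector.process are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SignalChange:
    """One message forwarded to the anomaly detection tool
    after a change of a signal observed in the network traffic."""
    signal: str       # name of the changed signal
    value: int        # new value of the signal
    timestamp: float  # time of the change (seconds)

def on_signal_change(detector, signal: str, value: int, timestamp: float) -> None:
    # Hand the decoded change over to the (hypothetical) detector object.
    detector.process(SignalChange(signal, value, timestamp))
```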
In a first set of experiments we injected 17 different failures. Using the algorithm from Figure 7, we were able to detect 88% of the failures correctly. In the remaining 12% of the cases the error was detected, but its cause was not identified correctly.
Although we were able to detect most of the errors (at least the failures we injected ourselves), we encountered a problem: sometimes correct behavior was recognized as an error. This happens because we cannot learn the completely correct behavior model; doing so would require an infinite number of recorded test samples. To mitigate this, the recorded observations can be enriched, e.g. by fitting a normal distribution and creating additional samples. Another possibility is to adapt the model at runtime. This would require a supervised learning algorithm that allows the plant operator to add a path to the model. This issue is not yet solved and should be addressed in future work.
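One possible realization of such an enrichment is sketched below: the delays observed for a transition are fitted with a normal distribution, and additional synthetic delays are drawn from it before the timing bounds are derived. This is only an illustrative sketch, and the function name enrich_delays is hypothetical:

```python
import random
import statistics

def enrich_delays(observed_delays, n_extra=100, seed=0):
    """Sketch: fit a normal distribution to the observed transition delays
    and draw additional synthetic samples, so the learned timing bounds
    become less sensitive to the limited number of recordings."""
    mu = statistics.mean(observed_delays)
    sigma = statistics.stdev(observed_delays) if len(observed_delays) > 1 else 0.0
    rng = random.Random(seed)
    return observed_delays + [rng.gauss(mu, sigma) for _ in range(n_extra)]

# The timing guard of a transition could then be taken from the minimum and
# maximum (or suitable quantiles) of the enriched sample set.
```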
6 CONCLUSIONS AND FUTURE WORK
In this paper we presented an efficient method for anomaly detection based on behavior models given as finite state machines/timed automata. In contrast to common approaches, these models are learned automatically by observing the running plant. We presented a suitable algorithm for learning such a model as a timed automaton. Our learning process comprises learning the parallelism structure (including the plant topology); finally, the behavior model is learned in the formalism of timed automata for each component.
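To make the learned artefact concrete, the following much-simplified sketch records, for every observed transition, the range of delays seen in the training sequences. It is not the identification algorithm of this paper (which additionally merges states and splits the model per component); states are simply event prefixes, and learn_timed_model is a hypothetical name:

```python
def learn_timed_model(sequences):
    """Sketch: each training sequence is a list of (event, delay) pairs,
    where delay is the time since the previous event. The result maps
    (state, event) -> (next_state, min_delay, max_delay)."""
    model = {}
    for seq in sequences:
        state = ()                     # start state = empty prefix
        for event, delay in seq:
            nxt = state + (event,)
            if (state, event) in model:
                _, lo, hi = model[(state, event)]
                model[(state, event)] = (nxt, min(lo, delay), max(hi, delay))
            else:
                model[(state, event)] = (nxt, delay, delay)
            state = nxt
    return model
```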
The overall model is used for anomaly detection. We showed the different types of anomalies that can be detected using this approach and validated its usability with first experimental results.
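Building on the simplified model above, the detection step itself can be pictured as follows; check_event and tolerance are again hypothetical names, and the two reported cases roughly correspond to an unexpected event and a timing violation:

```python
def check_event(model, state, event, delay, tolerance=0.0):
    """Sketch: check one observed event against the learned model and
    return the next state together with an anomaly message (or None)."""
    if (state, event) not in model:
        return state, "anomaly: unexpected event %r in current state" % (event,)
    nxt, lo, hi = model[(state, event)]
    if delay < lo - tolerance or delay > hi + tolerance:
        return nxt, "anomaly: delay %.3f outside learned bounds [%.3f, %.3f]" % (delay, lo, hi)
    return nxt, None
```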
During the experiments we encountered the problem that a model cannot be learned with an accuracy of 100%; this would require an infinite number of test samples. Consequently, in practice regular behavior is sometimes diagnosed as a failure. In future work, the learned model should be enriched with empirical data or adapted at runtime.
In further work, hybrid automata should be taken into consideration. This would increase the expressiveness of the models and the ability to detect errors reliably. So far, however, no appropriate learning algorithm for hybrid automata exists.