and to retrieve existing data from the database at the request of an event dispatcher agent. The history database is composed of a collection of entry elements. Each entry consists of a three-element tuple (SI_i, V_SI_i, Moment), where SI_i is the identifier of the sensor interpreter that provided the data, V_SI_i is the set of variables extracted from sensor interpreter SI_i, and Moment is the date and time when this information was retrieved by the system.
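The entry structure described above can be sketched as a simple record type plus a queryable collection. This is a minimal illustration only: all class, field, and method names here are assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class HistoryEntry:
    # The three-element tuple (SI_i, V_SI_i, Moment) from the text.
    sensor_interpreter_id: str   # SI_i: which interpreter provided the data
    variables: dict              # V_SI_i: variables extracted by that interpreter
    moment: datetime             # date and time the data was retrieved

class HistoryDatabase:
    """A collection of entry elements, queried by event dispatcher agents."""

    def __init__(self):
        self._entries = []

    def store(self, entry: HistoryEntry) -> None:
        self._entries.append(entry)

    def retrieve(self, sensor_interpreter_id: str) -> list:
        # Return every entry provided by the given sensor interpreter.
        return [e for e in self._entries
                if e.sensor_interpreter_id == sensor_interpreter_id]
```

An event dispatcher agent would call `retrieve` with the identifier of the sensor interpreter whose past data it needs when assembling an environment view.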
Once an event dispatcher agent has completed the chunk of data that comprises an environment view, it sends a message containing that data through the environment views distribution channel. This channel supports a broadcast-like communication mode: every normality analysis component subscribes to it, but each one receives only the messages addressed to it. The main advantage of this approach is that both the event dispatcher agents and the normality analysis components need to know only a single communication point.
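The single-communication-point scheme described above can be sketched as a small publish/subscribe channel in which the channel itself filters delivery by addressee. The names below (`DistributionChannel`, `subscribe`, `publish`) are illustrative assumptions, not the paper's API.

```python
class DistributionChannel:
    """Single communication point between event dispatcher agents
    (publishers) and normality analysis components (subscribers)."""

    def __init__(self):
        self._subscribers = {}   # component id -> callback

    def subscribe(self, component_id, callback):
        # Every normality analysis component subscribes to the channel.
        self._subscribers[component_id] = callback

    def publish(self, target_id, environment_view):
        # Broadcast-like mode: the channel is shared by all components,
        # but only the addressed subscriber actually receives the view.
        callback = self._subscribers.get(target_id)
        if callback is not None:
            callback(environment_view)
```

Because every component talks to the same channel object, adding or removing components does not require the event dispatcher agents to track individual endpoints.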
4 CONCLUSIONS
The trend in the design and development of intelligent surveillance systems is to use not only the visual information provided by a set of video cameras, but also other kinds of sensors, so that the system can maintain more accurate knowledge of the monitored environment. In this respect, the fusion of sensory data plays an essential role, as different sensors provide data in a variety of forms. Moreover, intelligent surveillance based on normality analysis requires that the same sensory data be used in different analysis contexts with different semantics. We have presented an architectural layer that fuses sensory data and provides it with semantics according to the particular requirements of the normality analysis components that consume it. We have defined the concept of environment view as an object that contains the data requested by a normality analysis component. The proposed architecture also scales easily in terms of both the sensors installed in the environment and the normality analysis components plugged into the system. Adding a new kind of sensor entails designing a new sensor interpreter capable of interpreting the data sent by that kind of sensor and making it available to the rest of the components of the system. Likewise, adding a new normality analysis component entails defining a new environment view according to the data and semantic requirements of the newly added component.
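The extensibility point described above, where supporting a new kind of sensor only requires a new sensor interpreter, can be sketched as subclassing a common interface. The class and method names here are hypothetical; the infrared example is purely illustrative.

```python
from abc import ABC, abstractmethod

class SensorInterpreter(ABC):
    """Common interface every sensor interpreter implements."""

    @abstractmethod
    def interpret(self, raw_data):
        """Turn raw sensor output into a set of named variables
        (the V_SI_i made available to the rest of the system)."""

class InfraredInterpreter(SensorInterpreter):
    # Adding a new kind of sensor = adding one subclass like this.
    def interpret(self, raw_data):
        # Example: expose presence detection as a boolean variable,
        # using an assumed detection threshold of 0.5.
        return {"presence": raw_data > 0.5}
```

The rest of the architecture only depends on the `SensorInterpreter` interface, so existing components are unaffected when a new interpreter is plugged in.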
ACKNOWLEDGEMENTS
This work has been funded by the Regional Government of Castilla-La Mancha under the Research Projects e-PACTOS (ref. PAC-06-141) and Sarasvati (ref. PBC06-0064), and by the Spanish Ministry of Education and Science under the Research Project TIN2007-62568.
ICAART 2009 - International Conference on Agents and Artificial Intelligence