and tracking, as demonstrated on outdoor real-world
scenes, while the proposed conceptual reasoning con-
tributes to the visual processing by enabling the local-
ization of hidden objects through knowledge induction.
ICAART 2016 - 8th International Conference on Agents and Artificial Intelligence