The detection phase of the algorithm basically follows the procedure illustrated for the validation of the normality space: each newly acquired image frame I_t is pre-processed according to the same region decomposition used in the learning phase and reshaped into the image vectors {y_{t,i}}, where the index i spans the region set. Each vector y_{t,i} is then projected onto the corresponding Û_i to determine whether, and to what degree, it lies in span{B_i}: that is, the norm of the projection error is compared with the modified threshold value T̂_{σ_i}.
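The per-region projection test described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the function name, the list-based region layout, and the way thresholds are passed in are all assumptions; only the core operation (project each region vector onto its learned orthonormal basis Û_i and compare the projection-error norm against a threshold) comes from the text.

```python
import numpy as np

def detect_events(region_vectors, bases, thresholds):
    """For each region i, project the region vector y_{t,i} onto the
    learned subspace spanned by the columns of bases[i] (assumed to be
    orthonormal), and flag an event when the norm of the projection
    error exceeds that region's threshold."""
    flags = []
    for y, U, T in zip(region_vectors, bases, thresholds):
        y_proj = U @ (U.T @ y)            # orthogonal projection onto span{B_i}
        err = np.linalg.norm(y - y_proj)  # residual: how far y is from normality
        flags.append(err > T)             # True = anomalous region
    return flags
```

A vector that lies inside the learned subspace yields a near-zero residual and is classified as normal; a vector with a large component outside it is flagged as an event.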
An exhaustive campaign of simulations and experiments has been performed regarding the detection of anomalous events in heterogeneous environments: in particular, we focused on the analysis of traffic-flow video sequences and on the detection of people in outdoor environments in the presence of wind acting on natural objects such as trees and bushes, and of changing light. The algorithm has also been implemented and tested in real-life situations, such as monitoring the behavior of people at a fair, with consistent results.
In addition, it is remarkable that the Event Detector has proven robust in common situations such as the detection of forgotten objects and of prohibited directions in crowd flow: by learning the normality of the scene, the algorithm implicitly covers these events. Nonetheless, a dedicated tool implementing a set of rules to specifically manage these situations is currently under development.
4 CONCLUSIONS
In this paper we have presented a novel approach to event detection based on the SVD technique. The core contribution is the ability to build a vector space summarizing what is normal in the scene with little supervision from the operator, who only has to choose an appropriate learning sequence. The algorithm works by projecting newly acquired images onto the so-constructed normality space in search of innovation, which, in the case of a surveillance system, signals the presence of events of some kind. Moreover, we employ an object-oriented approach by analysing regions of the image related to the characteristic size of the event of interest, instead of single pixels or local textures as done in previous works.
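The construction of the normality space summarized above can be illustrated with a short numpy sketch. The function name and the column-matrix layout are assumptions for illustration; the underlying idea (collect the region vectors of the learning sequence and retain the leading left singular vectors of their matrix as an orthonormal basis for what is "normal") is what the paper's SVD-based approach relies on.

```python
import numpy as np

def learn_normality_basis(Y, k):
    """Y is a (d, n) matrix whose n columns are the region vectors
    extracted from the learning sequence. Returns the first k left
    singular vectors of Y: an orthonormal basis for the k-dimensional
    subspace that best captures the normal appearance of the region."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U[:, :k]
```

At detection time, the projection error of a new region vector onto this basis measures how much "innovation" the frame carries with respect to the learned normality.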
The preliminary results obtained so far on indoor and outdoor sequences under operating conditions are quite promising, showing good robustness and high performance.
VISAPP 2008 - International Conference on Computer Vision Theory and Applications