5 CONCLUSIONS
In this paper we have presented an adaptation of the
Particle Video algorithm (Sand and Teller, 2006) for
crowd flow tracking. The goal was to detect where the
crowd enters and where it exits the monitored scene.
It is of particular interest for any crowd monitoring
system to track the different crowd flows so that the
environment can be adapted as efficiently as possible
to the different streams of pedestrians and their
strength. We showed that our algorithm can detect
the different entry and exit areas of the crowd in the
image, and that it can also provide the route of the
crowd within the image together with the rate of
pedestrians going from one area to another.
Moreover, our GPU implementation shows that this
kind of algorithm reaches real-time execution even
though it is not fully optimized. This depends, of
course, on the number of particles that are deployed
and on the hardware used. In our case, tests were
run on a machine equipped with an Intel Core i7 @
3.20 GHz CPU and an NVIDIA GeForce GTX 580 GPU.
About 10⁵ particles were deployed.
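
As an aside, the entry/exit detection summarized above can be
sketched compactly. The following is only an illustration, not the
paper's actual implementation: it assumes particle trajectories are
available as arrays of image positions, clusters their endpoints
with k-means (the number of areas n_areas is a hypothetical
parameter), and counts the flow between each pair of areas.

    import numpy as np
    from sklearn.cluster import KMeans
    from collections import Counter

    def entry_exit_flows(trajectories, n_areas=4):
        # trajectories: list of (T_i, 2) arrays of image positions.
        starts = np.array([t[0] for t in trajectories])
        ends = np.array([t[-1] for t in trajectories])
        # Cluster all endpoints jointly so entry and exit areas
        # share one set of labels.
        pts = np.vstack([starts, ends])
        km = KMeans(n_clusters=n_areas, n_init=10).fit(pts)
        entry = km.labels_[:len(starts)]
        exit_ = km.labels_[len(starts):]
        # Route strength: number of particles per (entry, exit) pair.
        flows = Counter(zip(entry.tolist(), exit_.tolist()))
        return km.cluster_centers_, flows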
To conclude, we would like to point out some further
research directions that could enhance such a system.
First, regarding the algorithm itself, the abnormality
conditions could be improved: for the moment, they
are based only on physical properties linked to the
pedestrians' accelerations.
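
As a concrete illustration of such an acceleration-based criterion,
a minimal sketch could look as follows; the threshold a_max and the
fixed time step dt are hypothetical tuning parameters, not values
from the paper.

    import numpy as np

    def is_abnormal(trajectory, dt=1.0, a_max=5.0):
        # trajectory: (T, 2) array of positions sampled at a fixed
        # frame rate; T >= 3 is needed to estimate accelerations.
        v = np.diff(trajectory, axis=0) / dt   # per-frame velocities
        a = np.diff(v, axis=0) / dt            # per-frame accelerations
        # Flag trajectories with physically implausible accelerations.
        return bool(np.any(np.linalg.norm(a, axis=1) > a_max))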
Second, additional functions could be added on top
of the existing ones. The most obvious is the clustering
or classification of behaviors: grouping the particles
according to their behavior and labeling these groups
could help the human operator analyze the scene being
monitored.
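
One possible realization of such a grouping, given here only as an
illustrative sketch: describe each trajectory by simple motion
features and cluster them with k-means. The feature choice and the
number of groups are assumptions, not part of the presented system.

    import numpy as np
    from sklearn.cluster import KMeans

    def behavior_features(traj, dt=1.0):
        # Simple per-trajectory descriptors: mean speed, speed
        # variability, and the spread of heading directions.
        # traj: (T, 2) array of positions, T >= 3.
        v = np.diff(traj, axis=0) / dt
        speed = np.linalg.norm(v, axis=1)
        heading = np.arctan2(v[:, 1], v[:, 0])
        return [speed.mean(), speed.std(), np.ptp(heading)]

    def group_behaviors(trajectories, n_groups=3):
        X = np.array([behavior_features(t) for t in trajectories])
        return KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)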
Finally, as explained in Subsection 3.2, a cloud of
particles generated on a crowd with λ particles per
pedestrian can be interpreted as λ representative
observations of that monitored crowd. These λ
observations could therefore be used to train crowd
simulators specifically designed to reproduce the
behavior of crowds at a location of interest monitored
by video surveillance. Learning these specific
behaviors would help to generate crowd models
adapted to specific environments and, once again,
help a human operator design an environmental
response to events of interest.
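
To make this data view concrete, the sketch below shows how such a
training set could be assembled; it assumes the seeding step records
which pedestrian each particle was generated on (ped_ids), which is
an assumption on top of what the paper describes.

    from collections import defaultdict

    def training_set(trajectories, ped_ids):
        # trajectories[i] is the path of particle i; ped_ids[i] is
        # the pedestrian it was seeded on. Each pedestrian then
        # contributes about λ trajectory observations that a crowd
        # simulator could be trained on.
        obs = defaultdict(list)
        for traj, pid in zip(trajectories, ped_ids):
            obs[pid].append(traj)
        return obs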
REFERENCES
Ali, S. and Shah, M. (2007). A Lagrangian particle dynam-
ics approach for crowd flow segmentation and stability
analysis. In IEEE International Conference on Com-
puter Vision and Pattern Recognition.
Allain, P., Courty, N., and Corpetti, T. (2012). AGORASET:
a dataset for crowd video analysis. In 1st ICPR Inter-
national Workshop on Pattern Recognition and Crowd
Analysis, Tsukuba, Japan.
Andrade, E. L., Blunsden, S., and Fisher, R. B. (2006).
Modelling crowd scenes for event detection. In Pro-
ceedings of the 18th International Conference on Pat-
tern Recognition - Volume 01, ICPR ’06, pages 175–
178.
Chau, D. P., Bremond, F., and Thonnat, M. (2013). Ob-
ject tracking in videos: Approaches and issues. arXiv
preprint arXiv:1304.5212.
Corpetti, T., Heitz, D., Arroyo, G., Memin, E., and Santa-
Cruz, A. (2006). Fluid experimental flow estimation
based on an optical-flow scheme. Experiments in flu-
ids, 40(1):80–97.
Farnebäck, G. (2003). Two-frame motion estimation based
on polynomial expansion. In Image Analysis, pages
363–370. Springer.
Helbing, D. and Molnár, P. (1995). Social force model for
pedestrian dynamics. Physical Review E, 51:4282.
Isard, M. and Blake, A. (1998). Condensation – conditional
density propagation for visual tracking. International
journal of computer vision, 29(1):5–28.
Liu, T. and Shen, L. (2008). Fluid flow and optical flow.
Journal of Fluid Mechanics, 614:253–291.
Mehran, R., Moore, B. E., and Shah, M. (2010). A streakline
representation of flow in crowded scenes. In Proc. of
the 11th European Conference on Computer Vision.
Mehran, R., Oyama, A., and Shah, M. (2009). Abnormal
crowd behavior detection using social force model. In
Proc. of the IEEE International Conference on Com-
puter Vision and Pattern Recognition 2009.
Rodriguez, M., Sivic, J., and Laptev, I. (2012). Analysis
of crowded scenes in video. Intelligent Video Surveil-
lance Systems, pages 251–272.
Sand, P. and Teller, S. (2006). Particle video: Long-range
motion estimation using point trajectories. Computer
Vision and Pattern Recognition, 2:2195–2202.
Tan, D. and Chen, Z. (2012). On a general formula of fourth
order Runge-Kutta method. Journal of Mathematical
Science & Mathematics Education, 7(2):1–10.
Viola, P. and Jones, M. (2001). Rapid object detection using
a boosted cascade of simple features. In Computer Vi-
sion and Pattern Recognition, 2001. CVPR 2001. Pro-
ceedings of the 2001 IEEE Computer Society Confer-
ence on, volume 1, pages I–511. IEEE.
Yilmaz, A., Javed, O., and Shah, M. (2006). Object track-
ing: A survey. ACM Computing Surveys (CSUR),
38(4):13.
Zhou, H., Yuan, Y., and Shi, C. (2009). Object tracking
using SIFT features and mean shift. Computer Vision
and Image Understanding, 113(3):345–352.