Hyper-Threading) notebook, powered by Windows. For a ceiling-mounted Kinect at a height of 2.6 m above the floor, the covered area is about 5.5 m². With the Nyko lens, the area covered by the camera is about 15.2 m². The most computationally demanding operation is the extraction of the depth reference image of the scene. For images of size 640×480, the computation time needed to extract the depth reference image is about 9 milliseconds.
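As a rough sanity check of these coverage figures, the floor footprint of a downward-facing depth camera can be approximated from its field of view and mounting height. The sketch below assumes the nominal Kinect depth FOV of roughly 57°×43°; the widened FOV attributed to the Nyko lens is a hypothetical value chosen only to reproduce the reported area, not a specification.

```python
import math

def floor_coverage(height_m: float, fov_h_deg: float, fov_v_deg: float) -> float:
    """Approximate floor area seen by a ceiling-mounted depth camera.

    The view frustum is modelled as a rectangle on the floor, with each
    side equal to 2 * h * tan(FOV / 2) along the corresponding axis.
    """
    w = 2.0 * height_m * math.tan(math.radians(fov_h_deg) / 2.0)
    d = 2.0 * height_m * math.tan(math.radians(fov_v_deg) / 2.0)
    return w * d

# Nominal Kinect depth FOV (about 57 x 43 degrees) at 2.6 m:
print(floor_coverage(2.6, 57.0, 43.0))   # ~5.8 m^2, close to the reported 5.5 m^2
# Hypothetical widened FOV with the Nyko lens (assumed values):
print(floor_coverage(2.6, 81.0, 67.0))   # ~15.3 m^2, in line with the reported 15.2 m^2
```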
5 CONCLUSIONS
In this work we demonstrated an approach to fall detection using a ceiling-mounted Kinect. The lying pose is separated from common daily activities by a classifier trained on features expressing the head-floor distance, the person's area, and the ratio of the shape's major length to its width. To distinguish between intentional lying postures and accidental falls, the algorithm also employs the motion between static postures. The experimental validation of the algorithm, conducted on realistic depth image sequences of daily activities and simulated falls, shows that it allows reliable fall detection with a low false-positive ratio. On more than 45,000 depth images the algorithm gave a 0% error rate. To reduce the processing overhead, an accelerometer was used to indicate a potential impact of the person and to trigger the analysis of depth images. The use of the accelerometer as an indicator of a potential fall simplifies the computation of the motion feature and increases its reliability. Owing to the use of depth images only, the system preserves the privacy of the user and works in poor lighting conditions.
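The overall decision logic lends itself to a compact sketch. The following is a minimal, self-contained illustration of the pipeline summarized above, not the authors' implementation: the thresholds, the simple rule standing in for the trained classifier, and the scalar motion score are all hypothetical placeholders.

```python
import math
from dataclasses import dataclass

# All thresholds below are hypothetical placeholders, not the paper's values.
IMPACT_THRESHOLD_G = 2.5   # accelerometer magnitude that triggers depth analysis
HEAD_FLOOR_MAX_M = 0.45    # head close to the floor suggests a lying pose
MIN_MOTION = 0.3           # minimum normalized motion between static postures

@dataclass
class PoseFeatures:
    head_floor_dist_m: float   # distance of the head to the floor plane
    person_area_m2: float      # area of the segmented person blob
    length_width_ratio: float  # shape's major length to width

def impact_detected(ax_g: float, ay_g: float, az_g: float) -> bool:
    """Start depth-image analysis only after a potential impact,
    which keeps the average processing overhead low."""
    return math.sqrt(ax_g**2 + ay_g**2 + az_g**2) > IMPACT_THRESHOLD_G

def looks_lying(f: PoseFeatures) -> bool:
    """Stand-in for the trained classifier: a lying pose shows a small
    head-floor distance and a large, elongated blob."""
    return (f.head_floor_dist_m < HEAD_FLOOR_MAX_M
            and f.length_width_ratio > 1.5
            and f.person_area_m2 > 0.4)

def is_fall(acc_g: tuple, features: PoseFeatures, motion: float) -> bool:
    """Fall = impact trigger + lying pose + large motion between the
    static posture before the event and the lying posture after it."""
    return (impact_detected(*acc_g)
            and looks_lying(features)
            and motion > MIN_MOTION)

# Example: a hard impact followed by a lying pose with large motion.
print(is_fall((0.1, 3.2, 0.4),
              PoseFeatures(head_floor_dist_m=0.2,
                           person_area_m2=0.7,
                           length_width_ratio=2.4),
              motion=0.8))   # True
```

Gating the depth analysis on the accelerometer is what keeps the computational load low: feature extraction and classification run only around a potential impact rather than on every frame.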
ACKNOWLEDGEMENTS
This work has been supported by the National Science
Centre (NCN) within the project N N516 483240.