Recognition in Assistive Environments for
financially supporting this work.
REFERENCES
Willems, J., Debard, G., Bonroy, B., Vanrumste, B., and Goedemé, T., "How to detect human fall in video? An overview", in Proceedings of the Positioning and Context-Awareness International Conference (POCA '09), Antwerp, Belgium, 28 May 2009.
Cucchiara, R., Grana, C., Piccardi, M., and Prati, A., "Detecting moving objects, ghosts, and shadows in video streams", IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10), pp. 1337-1342, 2003.
McFarlane, N. and Schofield, C., "Segmentation and tracking of piglets in images", Machine Vision and Applications, 8(3), pp. 187-193, May 1995.
Wren, C., Azarbayejani, A., Darrell, T., and Pentland, A. P., "Pfinder: real-time tracking of the human body", IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), pp. 780-785, July 1997.
Stauffer, C. and Grimson, W., "Adaptive background mixture models for real-time tracking", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '99), Ft. Collins, USA, June 23-25, 1999, pp. 246-252.
Cheng, F. C., Huang, S. C., and Ruan, S. J., "Implementation of Illumination-Sensitive Background Modeling Approach for Accurate Moving Object Detection", IEEE Transactions on Broadcasting, 57(4), pp. 794-801, 2011.
Christodoulidis, A., Delibasis, K., and Maglogiannis, I., "Near real-time human silhouette and movement detection in indoor environments using fixed cameras", in Proceedings of the 5th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '12), Heraklion, Crete, Greece, 2012.
Delamarre, Q. and Faugeras, O., "3D articulated models and multiview tracking with physical forces", Computer Vision and Image Understanding (CVIU), 81(3), pp. 328-357, 2001.
Kehl, R. and Van Gool, L., "Markerless tracking of complex human motions from multiple views", Computer Vision and Image Understanding (CVIU), 104(2-3), pp. 190-209, 2006.
Barron, C. and Kakadiaris, I., "Estimating anthropometry and pose from a single uncalibrated image", Computer Vision and Image Understanding (CVIU), 81(3), pp. 269-284, 2001.
Bregler, C., Malik, J., and Pullen, K., "Twist based acquisition and tracking of animal and human kinematics", International Journal of Computer Vision, 56(3), pp. 179-194, 2004.
Taylor, C., "Reconstruction of articulated objects from point correspondences in a single uncalibrated image", Computer Vision and Image Understanding (CVIU), 80(3), pp. 349-363, 2000.
Liebowitz, D. and Carlsson, S., "Uncalibrated motion capture exploiting articulated structure constraints", International Journal of Computer Vision, 51(3), pp. 171-187, 2003.
Poppe, R., "Vision-based human motion analysis: An overview", Computer Vision and Image Understanding, 108, pp. 4-18, 2007.
Kemmotsu, K., Tomonaka, T., Shiotani, S., Koketsu, Y., and Iehara, M., "Recognizing human behaviors with vision sensors in a Network Robot System", in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1274-1279, 2006.
Zhou, Z., Chen, X., Chung, Y., He, Z., Han, T. X., and Keller, J., "Activity Analysis, Summarization and Visualization for Indoor Human Activity Monitoring", IEEE Transactions on Circuits and Systems for Video Technology, 18(11), pp. 1489-1498, 2008.
Saito, M., Kitaguchi, K., Kimura, G., and Hashimoto, M., "Human Detection from Fish-eye Image by Bayesian Combination of Probabilistic Appearance Models", in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 243-248, 2010.
Li, H. and Hartley, R., "Plane-Based Calibration and Auto-calibration of a Fish-Eye Camera", in Proceedings of ACCV 2006, LNCS 3851, Springer-Verlag, Berlin Heidelberg, pp. 21-30, 2006.
Basu, A. and Licardie, S., "Modeling fish-eye lenses", in Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, 1993.
Shah, S. and Aggarwal, J., "Intrinsic parameter calibration procedure for a high distortion fish-eye lens camera with distortion model and accuracy estimation", Pattern Recognition, 29(11), pp. 1775-1788, 1996.
Delibasis, K. K., Goudas, T., Plagianakos, V. P., and Maglogiannis, I., "Fisheye Camera Modeling for Human Segmentation Refinement in Indoor Videos", in Proceedings of the 6th ACM International Conference on PErvasive Technologies Related to Assistive Environments (PETRA 2013), 2013.
Max, N., "Computer Graphics Distortion for IMAX and OMNIMAX Projection", in Proceedings of Nicograph 83, p. 137, December 1983.
Greene, N., "Environment Mapping and Other Applications of World Projections", IEEE Computer Graphics and Applications, 6(11), p. 21, November 1986.
http://paulbourke.net/dome/fisheye/
Micusik, B. and Pajdla, T., "Structure from Motion with Wide Circular Field of View Cameras", IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(7), pp. 1-15, 2006.
http://www.3dmodelfree.com/models/20966-0.htm
Goldberg, D., "Genetic Algorithms in Search, Optimization, and Machine Learning", Addison-Wesley, 1989.
Pose Recognition in Indoor Environments using a Fisheye Camera and a Parametric Human Model