REFERENCES
Blank, M., Gorelick, L., Shechtman, E., Irani, M., and
Basri, R. (2005). Actions as space-time shapes. In
ICCV, volume 2, pages 1395–1402. IEEE.
Conaire, C. O., O’Connor, N. E., and Smeaton, A. F.
(2007). Detector adaptation by maximising agreement
between independent data sources. In CVPR, pages 1–
6. IEEE.
Davison, A. C. and Smith, R. L. (1990). Models for ex-
ceedances over high thresholds. Journal of the Royal
Statistical Society. Series B (Methodological), pages
393–442.
Dollár, P., Rabaud, V., Cottrell, G., and Belongie, S. (2005).
Behavior recognition via sparse spatio-temporal fea-
tures. In VS-PETS, pages 65–72. IEEE.
Duan, K., Keerthi, S. S., Chu, W., Shevade, S. K., and Poo,
A. N. (2003). Multi-category classification by soft-
max combination of binary classifiers. In Multiple
Classifier Systems, pages 125–134. Springer.
Elgammal, A., Shet, V., Yacoob, Y., and Davis, L. S. (2003).
Learning dynamics for exemplar-based gesture recog-
nition. In CVPR, volume 1, pages I–571. IEEE.
Fathi, A. and Mori, G. (2008). Action recognition by learn-
ing mid-level motion features. In CVPR, pages 1–8.
IEEE.
Felzenszwalb, P. F. and Zabih, R. (2011). Dynamic pro-
gramming and graph algorithms in computer vision.
PAMI, 33(4):721–740.
Ferrari, V., Marin-Jimenez, M., and Zisserman, A. (2008).
Progressive search space reduction for human pose es-
timation. In CVPR, pages 1–8. IEEE.
Gong, W. et al. (2013). 3D Motion Data aided Human Ac-
tion Recognition and Pose Estimation. PhD thesis,
Universitat Autònoma de Barcelona.
Laptev, I. (2005). On space-time interest points. IJCV, 64(2-
3):107–123.
Li, C. and Yung, N. (2012). Arm pose modeling for visual
surveillance. In IPCV, pages 340–347.
Martin, D. R., Fowlkes, C. C., and Malik, J. (2004). Learn-
ing to detect natural image boundaries using local
brightness, color, and texture cues. PAMI, 26(5):530–
549.
Moeslund, T. B., Hilton, A., Krüger, V., and Sigal, L.
(2011). Visual analysis of humans: looking at peo-
ple. Springer.
Natarajan, P. and Nevatia, R. (2012). Hierarchical multi-
channel hidden semi-Markov graphical models for ac-
tivity recognition. CVIU.
Niebles, J. C., Wang, H., and Fei-Fei, L. (2008). Unsu-
pervised learning of human action categories using
spatial-temporal words. IJCV, 79(3):299–318.
Rodriguez, M., Ahmed, J., and Shah, M. (2008). Action
MACH: a spatio-temporal maximum average correlation
height filter for action recognition. In CVPR, pages
1–8.
Sadanand, S. and Corso, J. J. (2012). Action bank: A high-
level representation of activity in video. In CVPR,
pages 1234–1241. IEEE.
Scarrott, C. and MacDonald, A. (2012). A review
of extreme value threshold estimation and uncer-
tainty quantification. REVSTAT–Statistical Journal,
10(1):33–60.
Schuldt, C., Laptev, I., and Caputo, B. (2004). Recogniz-
ing human actions: a local SVM approach. In ICPR,
volume 3, pages 32–36. IEEE.
Sigal, L. and Black, M. J. (2006). HumanEva: Synchro-
nized video and motion capture dataset for evaluation
of articulated human motion. Brown University TR,
120.
Torralba, A., Murphy, K. P., and Freeman, W. T. (2004).
Sharing features: efficient boosting procedures for
multiclass object detection. In CVPR, volume 2, pages
II–762. IEEE.
Wang, L. and Yung, N. H. (2010). Extraction of mov-
ing objects from their background based on multi-
ple adaptive thresholds and boundary evaluation. ITS,
11(1):40–51.
Xu, R., Agarwal, P., Kumar, S., Krovi, V. N., and Corso, J. J.
(2012). Combining skeletal pose with local motion for
human activity recognition. In Articulated Motion and
Deformable Objects, pages 114–123. Springer.
Yamato, J., Ohya, J., and Ishii, K. (1992). Recognizing
human action in time-sequential images using hidden
Markov model. In CVPR, pages 379–385. IEEE.
Yang, Y. and Ramanan, D. (2011). Articulated pose estima-
tion with flexible mixtures-of-parts. In CVPR, pages
1385–1392. IEEE.
Yao, A., Gall, J., Fanelli, G., and Van Gool, L. (2011). Does
human action recognition benefit from pose estima-
tion? In BMVC, pages 67.1–67.11.