Table 1: F1 evaluation results (%) for different lighting conditions and all feature calibration methods.

Method                  Artificial   Natural   Mixed
No color constancy          77          82       72
Gray World                  78          83       71
White Patch                 84          82       77
Modified White Patch        85          85       80
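Table 1 reports F1 scores, i.e., the harmonic mean of precision and recall over the detected clothing changes. As a minimal sketch (the counts below are hypothetical, not taken from the paper's dataset):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 measure: harmonic mean of precision and recall,
    computed from true-positive, false-positive and
    false-negative counts of detected clothing changes."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 8 correct detections, 2 false alarms, 2 misses
print(f1_score(8, 2, 2))  # precision = recall = 0.8, so F1 = 0.8
```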
gories (natural and artificial lighting). The evaluation has been based on these two categories, as well as on their “mixed” condition: the latter is the general (and harder) case of detecting changes under all possible illumination conditions. The results of this process are shown in Table 1. Due to space limitations, we do not present the performance results for the upper and lower clothing separately, but only their averages. However, we would like to report that, on average, the problem of detecting changes in the lower clothes is at least 10% harder in terms of F1 measure. This is probably due to the fact that the lower body part is usually not entirely visible in the context of a real home environment, since pieces of furniture and other objects usually intervene between the sensor and the human.
4 CONCLUSIONS
We have presented a Kinect-based approach to detect-
ing changes in users’ clothes in a smart home envi-
ronment in the context of measuring the functional
status of the elderly. The whole system has been im-
plemented in the Processing programming language,
using the OpenNI SDK and achieves real-time detec-
tion. In order to evaluate the proposed approach, a
dataset of recordings under various illumination con-
ditions has been compiled, which is also publicly
available. Experimental results have indicated that the overall change detection method achieves up to 80% performance for mixed lighting conditions and 85% for single conditions, i.e., an improvement of 8% over the performance achieved when the initial feature representation is adopted. In addition, the adopted color constancy approach narrows the performance gap between different illumination conditions. In our ongoing work, we focus on the following directions: (a) implementation of more advanced image features (e.g., HOGs); (b) evaluation of more sophisticated color constancy techniques; and (c) extension of the benchmark with more user and clothing combinations.
ACKNOWLEDGEMENTS
The research leading to these results has re-
ceived funding from the European Union’s Seventh
Framework Programme (FP7/2007-2013) under grant
agreement no 288532. For more details, please see
http://www.usefil.eu.
SIGMAP 2014 - International Conference on Signal Processing and Multimedia Applications