matching.
The presented experiments demonstrate that the random forest algorithm achieves recognition performance similar to that of open-end DTW while running at much higher computational speed. Matching speed matters not only for intuitive interaction but also for the usability of our system: in practice, a daily-motion dataset can exceed 24 hours of recordings in some cases. The random forest algorithm is therefore well suited to our system, which is designed to quickly find optimal gestures for particular applications and situations.
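As a rough illustration of why matching speed favors the forest, the following sketch times a scikit-learn random forest prediction against a naive template scan using plain DTW (standard DTW here, as a simple stand-in for open-end DTW). This is not our implementation: the window length, forest size, template count, and synthetic data are assumptions for illustration only.

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
WIN = 64                                 # assumed window length (samples)
X_train = rng.normal(size=(500, WIN))    # synthetic 1-D motion windows
y_train = rng.integers(0, 2, size=500)   # gesture vs. non-gesture labels

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

def dtw_distance(a, b):
    """Plain O(len(a) * len(b)) DTW distance between two 1-D sequences
    (a stand-in for open-end DTW, which this sketch does not implement)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

query = rng.normal(size=WIN)             # one incoming window
templates = X_train[:50]                 # DTW must scan many templates

t0 = time.perf_counter()
forest.predict(query.reshape(1, -1))     # one forest pass per window
rf_ms = (time.perf_counter() - t0) * 1e3

t0 = time.perf_counter()
min(dtw_distance(query, t) for t in templates)
dtw_ms = (time.perf_counter() - t0) * 1e3

print(f"random forest: {rf_ms:.2f} ms, "
      f"DTW over {len(templates)} templates: {dtw_ms:.2f} ms")
```

The gap widens with the number of templates and the window length, since DTW cost grows with both while the forest's prediction cost stays nearly constant, which is why the forest scales better to day-long recordings.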
5 CONCLUSION AND FUTURE WORK
For intuitive interaction with wearable devices, gesture recognition has advantages over traditional input methods such as touch-pad gestures. However, when recognizing gestures on a smartwatch, false positive detections are a major problem.
We proposed a primitive-based gesture recognition approach to address this problem. The approach creates new gestures that are resistant to false detection during daily motion. We envision a system for LFP gesture creation: it records daily motion data from users, searches those recordings for LFP patterns with our proposed method, and visualizes the resulting LFP motion gestures by focusing on primitive gestures.
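The sketch below illustrates the idea behind such a search. It is an assumption-laden toy rather than the actual system: simple amplitude binning stands in for the primitive gestures, and synthetic noise stands in for recorded daily motion. It counts how often a candidate primitive sequence occurs in a long primitive-coded stream, a crude proxy for the candidate's false-positive rate.

```python
import numpy as np

def primitive_edges(signal, n_levels=7):
    """Amplitude bin edges; binning into n_levels symbols is an assumed
    stand-in for the paper's primitive gestures."""
    return np.linspace(signal.min(), signal.max(), n_levels + 1)[1:-1]

def count_occurrences(stream, candidate):
    """Slide the candidate primitive sequence over the coded daily-motion
    stream and count exact matches (each match is a false detection)."""
    k = len(candidate)
    return sum(
        np.array_equal(stream[i:i + k], candidate)
        for i in range(len(stream) - k + 1)
    )

rng = np.random.default_rng(1)
daily = rng.normal(size=100_000)                      # synthetic daily motion
edges = primitive_edges(daily)
stream = np.digitize(daily, edges)                    # primitive-coded stream
candidate = np.digitize(rng.normal(size=20), edges)   # proposed gesture

print("occurrences in daily motion:", count_occurrences(stream, candidate))
```

Under this framing, a candidate whose primitive sequence almost never occurs in the daily-motion stream would be kept as an LFP gesture, while frequently occurring candidates would be discarded.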
In future work, we will continue evaluating our proposed method with multiple participants and, through that evaluation, investigate ways of visualizing primitive sequences. In addition, we will verify the validity of our method for the seven primitive gestures.
REFERENCES
Akl, A., Feng, C., and Valaee, S. (2011). A novel accelerometer-based gesture recognition system. IEEE Transactions on Signal Processing, 59(12):6197.

Ashbrook, D. and Starner, T. (2010). MAGIC: a motion gesture design tool. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 2159–2168. ACM.

Bauer, B. and Kraiss, K.-F. (2002). Video-based sign recognition using self-organizing subunits. In Proceedings of the 16th International Conference on Pattern Recognition (ICPR 2002), volume 2, pages 434–437. IEEE.

Chen, Q., Georganas, N. D., and Petriu, E. M. (2007). Real-time vision-based hand gesture recognition using Haar-like features. In Proceedings of the IEEE Instrumentation and Measurement Technology Conference (IMTC 2007), pages 1–6. IEEE.

Kohlsdorf, D. K. H. and Starner, T. E. (2013). MAGIC summoning: towards automatic suggesting and testing of gestures with low probability of false positives during use. The Journal of Machine Learning Research, 14(1):209–242.

Liaw, A. and Wiener, M. (2002). Classification and regression by randomForest. R News, 2(3):18–22.

Liu, J., Zhong, L., Wickramasuriya, J., and Vasudevan, V. (2009). uWave: Accelerometer-based personalized gesture recognition and its applications. Pervasive and Mobile Computing, 5(6):657–675.

Mitra, S. and Acharya, T. (2007). Gesture recognition: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 37(3):311–324.

Mori, A., Uchida, S., Kurazume, R., Taniguchi, R.-i., Hasegawa, T., and Sakoe, H. (2006). Early recognition and prediction of gestures. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR 2006), volume 3, pages 560–563. IEEE.

Oka, R. (1998). Spotting method for classification of real world data. The Computer Journal, 41(8):559–565.

Park, T., Lee, J., Hwang, I., Yoo, C., Nachman, L., and Song, J. (2011). E-Gesture: a collaborative architecture for energy-efficient gesture recognition with hand-worn sensor and mobile devices. In Proceedings of the 9th ACM Conference on Embedded Networked Sensor Systems, pages 260–273. ACM.

Ruiz, J. and Li, Y. (2011). DoubleFlip: a motion gesture delimiter for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 2717–2720. ACM.

Ruiz, J., Li, Y., and Lank, E. (2011). User-defined motion gestures for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 197–206. ACM.

Ruppert, G. C. S., Reis, L. O., Amorim, P. H. J., de Moraes, T. F., and da Silva, J. V. L. (2012). Touchless gesture user interface for interactive image visualization in urological surgery. World Journal of Urology, 30(5):687–691.

Schlömer, T., Poppinga, B., Henze, N., and Boll, S. (2008). Gesture recognition with a Wii controller. In Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, pages 11–14. ACM.

Zafrulla, Z., Brashear, H., Starner, T., Hamilton, H., and Presti, P. (2011). American Sign Language recognition with the Kinect. In Proceedings of the 13th International Conference on Multimodal Interfaces, pages 279–286. ACM.

Zhang, M. and Sawchuk, A. A. (2012). Motion primitive-based human activity recognition using a bag-of-features approach. In Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium, pages 631–640. ACM.