of SVM with three other algorithms, namely the MLP
algorithm, the IBK approach, and the J48 method, which
achieved accuracies of 90.37%, 88.72%, and 89.33%,
respectively. Our approach outperforms all of this
previous work with an accuracy of 96.3%.
Regarding the REALDISP dataset, the authors
in (Banos et al., 2014) tested the activities, after
extracting the best features, on the decision tree,
K-Nearest Neighbor, and Naïve Bayes algorithms. The
accuracies were 90%, 96%, and 72%, respectively, for
the ideal-placement data, and 78%, 89%, and 65% for
the self-placement data. The difference between
the ideal-placement and self-placement settings
is described in Section 2.2. Our approach achieves
94.5% for the ideal-placement data.
5 CONCLUSION AND FUTURE
WORK
This paper introduced the HAD-AW dataset, which
includes 31 human activities, as a reference source for
smartwatch-based human activity recognition research.
Additionally, we presented a human motion
recognition framework based on tri-axial sensory data
from IMU sensors. The framework exploits feature re-
duction as a preprocessing step, where the raw signals
are parameterized by a combination of statisti-
cal and physical features.
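As a minimal illustrative sketch of this preprocessing step, the following Python function computes a small feature vector from one tri-axial sensor window. The specific features shown here (per-axis mean, standard deviation, and signal energy, plus the mean acceleration magnitude) are common choices and stand in for the paper's actual feature set, which is not reproduced here.

```python
import numpy as np

def window_features(window):
    """Parameterize one tri-axial sensor window of shape
    (n_samples, 3) by a few statistical and physical features.

    Illustrative feature set (not the paper's exact one):
    per-axis mean, per-axis standard deviation, per-axis
    signal energy, and the mean 3-D acceleration magnitude.
    """
    window = np.asarray(window, dtype=float)
    mean = window.mean(axis=0)                          # 3 statistical features
    std = window.std(axis=0)                            # 3 statistical features
    energy = (window ** 2).sum(axis=0) / len(window)    # 3 physical features
    magnitude = np.linalg.norm(window, axis=1).mean()   # 1 physical feature
    return np.concatenate([mean, std, energy, [magnitude]])

# Example: a 128-sample tri-axial accelerometer window
rng = np.random.default_rng(0)
feats = window_features(rng.normal(size=(128, 3)))
print(feats.shape)  # (10,)
```

In this way each raw window, regardless of its length, is reduced to a fixed-size descriptor that can be fed to a classifier.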
The experimental results indicated that the recog-
nition accuracy reaches 95.3% for the HAD-AW dataset,
96.3% for the USC-HAD dataset (Zhang and Sawchuk,
2012), 84.44% for the CMU-MMAC dataset (De la
Torre et al., 2008), 94.5% for the REALDISP
dataset (Baños et al., 2012), and 93.2% for the dataset
in (Gomaa et al., 2017). Moreover, it reduced the
training and testing time by 88% to 98% compared
to the RF-based method across the different datasets.
It is worth mentioning that when we used the proposed
approach to test the combination of all 31 activities
of the HAD-AW dataset and the 14 activities
of (Gomaa et al., 2017), the total accuracy reached
90.2% for the 45 combined activities.
In the future, we aim to increase the number of
activities and collect a new dataset using the Myo device,
which provides data from IMU sensors, electromyo-
graphic (EMG) sensors, and a magnetometer. We will
compare the recognition accuracy between the Ap-
ple Watch and the Myo device using different algo-
rithms. We also plan to collect another dataset of daily
human activities in a fully continuous stream scenario
and to develop approaches for recognizing them.
REFERENCES
Baños, O., Damas, M., Pomares, H., Rojas, I., Tóth, M. A.,
and Amft, O. (2012). A benchmark dataset to eval-
uate sensor displacement in activity recognition. In
Proceedings of the 2012 ACM Conference on Ubiqui-
tous Computing, pages 1026–1035. ACM.
Banos, O., Toth, M. A., Damas, M., Pomares, H., and Ro-
jas, I. (2014). Dealing with the effects of sensor dis-
placement in wearable activity recognition. Sensors,
14(6):9995–10023.
Bruno, B., Mastrogiovanni, F., Sgorbissa, A., Vernazza,
T., and Zaccaria, R. (2012). Human motion mod-
elling and recognition: A computational approach. In
Automation Science and Engineering (CASE), 2012
IEEE International Conference on, pages 156–161.
IEEE.
Bruno, B., Mastrogiovanni, F., Sgorbissa, A., Vernazza, T.,
and Zaccaria, R. (2013). Analysis of human behavior
recognition algorithms based on acceleration data. In
Robotics and Automation (ICRA), 2013 IEEE Interna-
tional Conference on, pages 1602–1607. IEEE.
De la Torre, F., Hodgins, J., Bargteil, A., Martin, X., Macey,
J., Collado, A., and Beltran, P. (2008). Guide to the
Carnegie Mellon University Multimodal Activity (CMU-
MMAC) database. Robotics Institute, page 135.
Donahue, J., Anne Hendricks, L., Guadarrama, S.,
Rohrbach, M., Venugopalan, S., Saenko, K., and Dar-
rell, T. (2015). Long-term recurrent convolutional net-
works for visual recognition and description. In Pro-
ceedings of the IEEE conference on computer vision
and pattern recognition, pages 2625–2634.
Gomaa, W., Elbasiony, R., and Ashry, S. (2017). ADL clas-
sification based on autocorrelation function of iner-
tial signals. In Machine Learning and Applications
(ICMLA), 2017 16th IEEE International Conference
on, pages 833–837. IEEE.
Politi, O., Mporas, I., and Megalooikonomou, V. (2014).
Human motion detection in daily activity tasks using
wearable sensors. In Signal Processing Conference
(EUSIPCO), 2014 Proceedings of the 22nd European,
pages 2315–2319. IEEE.
Spriggs, E. H., De La Torre, F., and Hebert, M. (2009).
Temporal segmentation and activity classification
from first-person sensing. In Computer Vision and
Pattern Recognition Workshops, 2009. CVPR Work-
shops 2009. IEEE Computer Society Conference On,
pages 17–24. IEEE.
Zhang, G. and Piccardi, M. (2015). Structural SVM with
partial ranking for activity segmentation and classifi-
cation. IEEE Signal Processing Letters, 22(12):2344–
2348.
Zhang, M. and Sawchuk, A. A. (2012). USC-HAD: a daily ac-
tivity dataset for ubiquitous activity recognition using
wearable sensors. In Proceedings of the 2012 ACM
Conference on Ubiquitous Computing, pages 1036–
1043. ACM.
An LSTM-based Descriptor for Human Activities Recognition using IMU Sensors