tivities to related components of the smart-city scheme. PHASOR is built from three components: ID, Model, and Sensor. The first component identifies which sensor is used to monitor a specific object, such as a group of users, an individual user, or a type of smartphone. The second component contains the general model and the individual model, which generate suitable classifiers for HAR. The last component includes two types of "sensors": (1) physical sensors, which utilize the sensors embedded in smartphones to recognize human activities, and (2) human factors, which use human interaction to increase the accuracy of activity detection for individual users. The advantage of PHASOR is that its accuracy increases at run-time. It is therefore well suited to lifelogging applications, which analyze data captured from wearable devices and derive insights from them, in the smart-city domain.
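To make this decomposition concrete, the following is a minimal sketch of the three components in Python; the class and attribute names are illustrative assumptions made for exposition, not PHASOR's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative sketch only: names and structure are assumptions for exposition
# and do not reflect PHASOR's actual implementation.

@dataclass
class SensorComponent:
    """Wraps the two kinds of 'sensors': physical readings and human feedback."""
    read_physical: Callable[[], List[float]]                      # e.g., accelerometer samples
    human_corrections: List[str] = field(default_factory=list)    # labels supplied by an active user

@dataclass
class ModelComponent:
    """Holds the shared (general) model and the per-user (individual) model."""
    general_model: object        # classifier trained on data from a group of users
    individual_model: object     # classifier adapted to one specific user

@dataclass
class PhasorEntry:
    """One monitored object: a group of users, an individual user, or a phone type."""
    entry_id: str                # the ID component
    model: ModelComponent
    sensor: SensorComponent
```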
The major contributions of this work are: 1. Enhancing Human Factors: as discussed in (Sowe and Zettsu, 2015; Sowe et al., 2016), human factors can contribute to the success of the IoE. Unfortunately, it is difficult to know how a human entity interacts with the IoE. This work models the human's involvement (i.e., passive and active roles) in the IoE to enhance the accuracy of HAR. Users can flexibly change their role from passive (i.e., their activities are recorded by smartphones) to active (i.e., they correct the recognition results). 2. Adapting: user feedback is used to improve individual human activity recognition, allowing the system to adapt to specific users, as illustrated in the sketch below. 3. Global Working Scope: by taking into account the information shared among a group of people, the system needs less lead time to detect the activities of a new user at the beginning of the lifelog monitoring process, while maintaining an acceptable HAR accuracy.
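As a minimal illustration of the adaptation contribution, the sketch below shows how a correction from an active user could be folded into an individual model via incremental learning. The use of scikit-learn's SGDClassifier, the label set, and the placeholder features are assumptions chosen for brevity, not the classifier or features PHASOR actually uses.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative sketch: we only show how a correction from an "active" user
# could refine an individual model incrementally; this is not PHASOR's
# actual update rule.

ACTIVITIES = ["walking", "standing", "sitting", "jogging"]  # assumed label set

# Individual model seeded in "passive" mode from automatically labelled windows.
individual_model = SGDClassifier(random_state=0)
X_seed = np.random.randn(40, 6)                 # 40 placeholder feature vectors
y_seed = np.random.choice(ACTIVITIES, size=40)  # labels predicted by the general model
individual_model.partial_fit(X_seed, y_seed, classes=np.array(ACTIVITIES))

def apply_user_correction(features, corrected_label):
    """The user switches to the active role and corrects one recognized window."""
    individual_model.partial_fit(features.reshape(1, -1), [corrected_label])

# Example: the phone predicted "standing", the user corrects it to "sitting".
window_features = np.random.randn(6)
apply_user_correction(window_features, "sitting")
```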
2 RELATED WORK
In general, most smartphone-based HAR systems are built with three major components: sensory data acquisition, model training, and activity recognition (Capela et al., 2016). The first component utilizes accelerometer, gyroscope, and barometer sensors to gather data on human activities. These sensors can be used alone (Siirtola and Roning, 2012; Bayat et al., 2014) or in combination (Shoaib, 2013; Chetty et al., 2015; Capela et al., 2016). The second component is built using different classification methods such as Support Vector Machines (SVM), k-Nearest Neighbours (k-NN/IBk), and others (Lara and Labrador, 2013; Shoaib et al., 2015). The last component applies the trained models to the data gathered by the first component to recognize human activities.
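The following minimal sketch illustrates this generic three-stage pipeline, assuming 128-sample tri-axial accelerometer windows, simple statistical features, and a k-NN classifier from scikit-learn; the window length, features, and hyper-parameters are illustrative and not those of any cited system.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Minimal sketch of the three components: (1) sensory data acquisition is
# simulated with random tri-axial accelerometer windows, (2) model training
# uses simple statistical features and k-NN, (3) activity recognition applies
# the trained model to a new window. All parameters are illustrative.

WINDOW = 128  # samples per window (e.g., 2.56 s at 50 Hz, an assumption)

def extract_features(window):
    """window: (WINDOW, 3) array of accelerometer samples -> feature vector."""
    return np.concatenate([window.mean(axis=0),                     # mean of x, y, z
                           window.std(axis=0),                      # std of x, y, z
                           [np.abs(window).sum() / len(window)]])   # signal magnitude area

# (1) Acquisition: pretend we collected labelled windows from volunteers.
rng = np.random.default_rng(0)
windows = rng.normal(size=(100, WINDOW, 3))
labels = rng.choice(["walking", "sitting", "standing"], size=100)

# (2) Training: build the classifier on the extracted features.
X = np.array([extract_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)

# (3) Recognition: classify a newly acquired window.
new_window = rng.normal(size=(WINDOW, 3))
print(clf.predict(extract_features(new_window).reshape(1, -1)))
```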
In earlier methods, e.g., (Siirtola and Roning, 2012) and (Bayat et al., 2014), only accelerometer information was exploited. In (Siirtola and Roning, 2012), the authors used two classifiers, namely quadratic discriminant analysis and k-NN, to recognize human activities. The main contribution of that work is how to deploy the components on the smartphone and on a server so that the system works optimally. However, their method requires the phone to be in a fixed position, e.g., in a front trouser pocket, which limits its application range. In (Bayat et al., 2014), the authors used several classifiers and, to overcome the phone-position problem, introduced a strategy that selects a suitable classifier depending on the kind of activity and the position of the smartphone. In (Miao et al., 2015), the authors also discussed the impact of varying smartphone positions and orientations on the quality of HAR. They overcame this problem by developing orientation-independent features, so that the system works with acceptable accuracy whichever pocket the phone is carried in (a generic example of such a feature is sketched after this paragraph). In (Chetty et al., 2015), the authors exploited information not only from the accelerometer but also from the gyroscope to build classifiers. Data mining approaches were utilized to build the classifiers, with an information-theory-based ranking of features as the pre-processing step. Recently, Capela et al. (Capela et al., 2016) proposed a new method that takes into account different types of users with differences in walking biomechanics. This system is considered a more affordable and convenient solution than wearable sensors. The proposed system extracts five features from accelerometer and gyroscope data and builds classifiers using decision trees. The activities were tested on both able-bodied and stroke participants, who follow different treatment policies from a medical perspective. According to the experimental results, the hypothesis that differences in walking biomechanics influence the identification of human activities is confirmed.
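To illustrate the idea of orientation-independent features mentioned above for (Miao et al., 2015), the sketch below computes statistics of the acceleration magnitude, which is invariant to any rotation of the phone inside a pocket; this is a generic example, not the exact feature set of the cited work.

```python
import numpy as np

# Illustrative example of an orientation-independent feature: the Euclidean
# magnitude of the acceleration vector is unchanged under any rotation of the
# phone, so statistics computed from it do not depend on device orientation.
# This is a generic example, not the feature set of (Miao et al., 2015).

def magnitude_features(window):
    """window: (N, 3) raw accelerometer samples (x, y, z)."""
    mag = np.linalg.norm(window, axis=1)          # |a| per sample, rotation-invariant
    return np.array([mag.mean(), mag.std(), mag.max() - mag.min()])

# The same movement recorded with the phone rotated 90 degrees about the
# z-axis yields (numerically) identical magnitude features.
rng = np.random.default_rng(1)
window = rng.normal(size=(128, 3))
rot90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
rotated = window @ rot90.T
assert np.allclose(magnitude_features(window), magnitude_features(rotated))
```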
In (Vavoulas et al., 2016; Ojetola et al., 2015), the authors discussed the insufficiency and lack of standardization of training data for human activity recognition and introduced shared databases collected from volunteers, together with a set of basic features and baseline methods for comparison with other approaches. The variety of users and smartphone positions was also considered in these studies.
In our study, we propose a method that not only improves the accuracy but also takes the impact of human factors into account. We also exploit the data collected in (Vavoulas et al., 2016) and compare their approach with the proposed method.