similarity and statistics about the walking cycle, the authors achieve an identification error rate of 5%. Recently, user verification has also been addressed in (Derawi et al., 2010), where walking data were collected from 51 testers walking along an indoor corridor, using the accelerometer of an Android-based mobile phone. To the best of our knowledge, that work is the first to use data collected with a real mobile phone.
In this work, we report first encouraging results on user verification from walking activity. In our lab, we have developed a wearable system that is easy to use and comfortable to wear. Motion, audio and photometric data for five basic everyday activities have been collected from ten volunteer testers. The activities performed are walking, climbing stairs up and down, standing still, talking with people and working at a computer. Testers performed the activities wherever they wanted and for as long as they wanted. Developing a custom wearable system lets us simulate the wearable devices people use every day, such as mobile phones, while retaining complete freedom to customize every software layer, from the operating system to the application level.

We propose a discriminative machine learning pipeline for user verification. Discriminative classifiers have proven to be extremely efficient and powerful tools, in many cases surpassing the performance of generative machine learning techniques. Within this framework, a two-stage process is defined. In the first stage, a general walking classifier is trained on a baseline training set using an ensemble strategy based on AdaBoost (Freund and Schapire, 1999). The classifier is subsequently personalized by adding data from verified users in order to boost walking-detection performance for those users. Since AdaBoost is an incremental classifier, this process is extremely efficient: it only requires adding further weak classifiers to the original baseline classifier. Once the walking activity is detected for a specific user, we must verify whether that user is authorized. From the discriminative point of view, user modeling without counter-examples can be done using a One-Class classification strategy (Tax, 2001). In One-Class classification, the boundary of a given dataset is estimated, and the confidence that a sample belongs to that set depends on its distance to the boundary. Thus, in the second stage, a One-Class ensemble is created using as base classifier a convex hull on a reduced feature space. In this work, we show that this novel technique performs well in terms of both classification accuracy and computational cost. The results obtained show that users can be verified with high confidence, with very low false positive and false negative rates.

The layout of this paper is as follows. In the next section, we describe the wearable device we developed and the data acquisition process. In Section 3, we describe the feature extraction process and, in Section 4, the classifiers used. In Section 5 we present the results obtained and, finally, in Section 6, we discuss the results and conclude.
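To make the first stage concrete, the incremental nature of AdaBoost can be sketched as follows: a baseline walking classifier is boosted on general data, and personalization only appends a few further weak classifiers on top of the frozen baseline. This is a minimal illustration of ours with synthetic data; the feature dimensions, labels and round counts are assumptions, not the setup used in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost(X, y, n_rounds, ensemble=None):
    """Run n_rounds of discrete AdaBoost with stumps, extending `ensemble` if given.
    Labels y must be in {-1, +1}."""
    ensemble = list(ensemble or [])
    w = np.full(len(y), 1.0 / len(y))
    # Re-weight samples according to the existing ensemble's mistakes,
    # so new stumps focus on what the baseline still gets wrong.
    for alpha, stump in ensemble:
        w *= np.exp(-alpha * y * stump.predict(X))
        w /= w.sum()
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, stump))
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all weak classifiers."""
    return np.sign(sum(a * s.predict(X) for a, s in ensemble))

rng = np.random.default_rng(0)
# Synthetic "walking vs. non-walking" baseline data (labels in {-1, +1}).
X_base = rng.normal(size=(300, 4))
y_base = np.sign(X_base[:, 0])
baseline = boost(X_base, y_base, n_rounds=20)

# Personalization: add the verified user's data and only a few more stumps.
X_user = rng.normal(loc=0.3, size=(100, 4))
y_user = np.sign(X_user[:, 0])
personal = boost(np.vstack([X_base, X_user]), np.concatenate([y_base, y_user]),
                 n_rounds=5, ensemble=baseline)
```

Note that `baseline` itself is left untouched: personalization produces a new ensemble that shares the original weak classifiers and adds five more, which is what makes the update cheap.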
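The second-stage idea, a One-Class ensemble with a convex hull on a reduced feature space as base classifier, can be sketched as follows. This is our own minimal illustration, not the authors' implementation: the feature dimensionality, the use of random 2-D projections as the reduced space, the vote threshold and all data are assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)

def fit_hull_ensemble(X, n_members=10):
    """One convex hull (via a Delaunay triangulation, so that point-in-hull
    queries are cheap) per random 2-D projection of the enrollment data."""
    members = []
    for _ in range(n_members):
        proj = rng.normal(size=(X.shape[1], 2))   # random 2-D projection
        members.append((proj, Delaunay(X @ proj)))
    return members

def verify(members, x, min_votes=0.5):
    """Accept x as the enrolled user if enough member hulls contain its projection."""
    votes = sum(tri.find_simplex(x @ proj) >= 0 for proj, tri in members)
    return votes / len(members) >= min_votes

# Toy enrollment data: 200 samples of a 6-D walking-feature vector.
X_user = rng.normal(size=(200, 6))
model = fit_hull_ensemble(X_user)

centroid = X_user.mean(axis=0)          # certainly inside every hull
print(verify(model, centroid))          # accepted
print(verify(model, centroid + 100.0))  # far-away impostor: rejected
```

A hard in/out test per hull is the simplest choice; a graded confidence, as described above, would instead use the sample's distance to each hull boundary.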
2 THE WEARABLE DEVICE
In this section, the wearable device and the data acquisition process are described. The wearable system, called BeaStreamer, is built around the Beagle Board (TI, 2008). The device has a small form factor and is comfortable to wear. Using BeaStreamer, data have been collected from ten testers performing five activities.
2.1 BeaStreamer
BeaStreamer is a wearable system designed for multi-sensor data acquisition and processing. The system acquires audio, video and motion data. It can easily be carried in one hand or in a small bag around the waist. The audio and video streams are acquired with a standard low-cost webcam. Motion data are acquired with a Bluetooth tri-axial accelerometer. The core of the system is the Beagle Board, a low-power, low-cost single-board computer built around the OMAP3530 system-on-chip. The OMAP3530 includes an ARM Cortex-A8 CPU at 600 MHz, a TMS320C64x+ DSP for accelerated video and audio codecs, and an Imagination Technologies PowerVR SGX530 GPU providing accelerated 2D and 3D rendering with OpenGL ES 2.0 support. The board requires a regulated 5 V DC supply and draws about 2 W. A 1700 mAh AKAI external USB battery gives the system approximately 3 hours of autonomy at full functionality. An embedded Linux operating system has been compiled ad hoc for the system, and standard software interfaces such as Video4Linux2 and BlueZ can be used for data acquisition. A monitor and a keyboard can be connected directly to the board, so it can be used as a standard personal computer; the system can also be accessed through a serial terminal. The GStreamer framework has been used for acquiring audio, video and Bluetooth motion data, making it easy to manage synchronization issues in the data acquisition process. The board can be programmed in C or Python.
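As an illustration of the kind of acquisition pipeline involved, the following gst-launch commands capture webcam video and audio to files. This is a sketch of ours using GStreamer 0.10 element names (the version contemporary with the Beagle Board); the device path, capture format and file names are assumptions, not the paper's configuration.

```shell
# Video from the USB webcam to an AVI file (raw YUV frames, 320x240 @ 15 fps):
gst-launch-0.10 v4l2src device=/dev/video0 \
    ! video/x-raw-yuv,width=320,height=240,framerate=15/1 \
    ! avimux ! filesink location=video.avi

# Audio from the webcam microphone, via ALSA, to a WAV file:
gst-launch-0.10 alsasrc ! audioconvert ! wavenc ! filesink location=audio.wav
```

In a real deployment the two streams would be acquired in a single pipeline (or a small C/Python GStreamer program) so that GStreamer's clocking handles the synchronization mentioned above.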
PECCS 2011 - International Conference on Pervasive and Embedded Computing and Communication Systems