MODEL-BASED FEATURE EXTRACTION FOR ASSESSMENT OF
DRIVER-RELATED FATIGUE
Damián Álvarez, Álvaro Orozco
Grupo de Investigación en Control e Instrumentación, Universidad Tecnológica de Pereira, La Julita, Pereira, Colombia
Augusto Salazar
Grupo de Investigación en Percepción y Control Inteligente, Universidad Nacional Sede Manizales, La Nubia, Manizales, Colombia
Keywords:
Driver fatigue.
Abstract:
Fatigue is one of the major causes of road crashes. This paper presents a methodology for driver fatigue assessment based on computer vision (CV). CV is used to characterize different visual responses of the driver while driving and suffering from fatigue. Some of these visual responses are the eyelid and lip movements. The proposed methodology uses an active appearance model (AAM) to adjust the Candide-3 facial model to image sequences from which spatial measures can be computed. These measures include the eye closeness and the mouth openness. Results show that, with the computed measures, it is possible to efficiently extract discriminant parameters related to the driver fatigue state, for example, the PERCLOS, the AECS, and the YawnFrec. Finally, an experimental framework is designed in order to compare the performance of the proposed method with psychological signal-based methods.
1 INTRODUCTION
Driving for long periods of time can decrease the alertness and performance of the driver, who may start to suffer from fatigue (Ting et al., 2008). Fatigue and lack of sleep have been identified as the most frequent causes of traffic accidents on the road. A driver who is under these conditions is risking not only his/her own life but also the lives of other people. In order to prevent these kinds of accidents, in the last decades a huge effort has been made to develop monitoring systems that detect driver fatigue and warn him/her by using different techniques. However, an efficient system for fatigue assessment is still an important open issue.
2 THEORETICAL BACKGROUND
Fatigue. This is a state of alteration in both the awareness level and the perception level of a person. It affects psychomotor processes such as reaction speed, attention level, and decision making, which are crucial for the safe development of an activity. Fatigue is not the same as sleep, but induction of sleep can occur with fatigue. Fatigue can be caused by physical effort, emotional stress, lack of sleep, or an unspecified disorder.
Driver Fatigue. This is a state of reduced mental alertness, which impairs the performance of a range of cognitive and psychomotor tasks, including driving (Saroj and Lal, 2001). Driver fatigue can be subdivided into two groups: sleep-related and task-related. The first is caused by lack of sleep (this is the category selected in this work), while the second is caused by distracting tasks performed while driving.
Candide Model. This is a parameterized facial mask that has been specifically developed for model-based coding of human faces. Different versions of this mask have been developed; the most recent is the third, which has been implemented mainly to simplify the animation of MPEG-4 facial animation parameters (Ahlberg, 2001).
2.1 Psychological Test for Attention
Measurement
Works that deal with the assessment of driver-related fatigue require experimental frameworks in which the methodology can be contrasted and validated. For
example, attention tests (such as interviews or computer-based tests), which are basically monitoring tasks that require psychomotor reactions. The test of variables of attention (TOVA) is one of the attention tests most commonly used with drivers and is considered the gold standard among this kind of test. It consists of 22 minutes of psychomotor tasks in which the subject must sustain attention and respond to a random stimulus. Another example is the psychomotor vigilance test (PVT), which measures the visual reaction time (RT) by means of portable devices. This test is another common tool for measuring fatigue in performance and sleep-deprivation studies. The PVT is a 10-minute test that, like the TOVA, requires responding to a stimulus as fast as possible.
Other examples of tests that can be used in these kinds of studies are those that evaluate the ability to focus attention, such as the Stroop task. In general, any test that measures the RT is useful in these studies, because sleep restriction and deprivation can be the main cause of an increase in the RT.
2.2 Classification of Driver Monitoring
Systems
In order to reduce road crashes, enormous efforts have been made to develop driver monitoring systems. These systems can be classified into three classes (Vural et al., 2007). 1) Studies that analyze measures of the driver's performance. In this kind of system the car is equipped with measurement devices that indicate whether the person is driving as usual or not. These systems have the drawback that they cannot be adapted to the driver's habits. 2) Works related to the measurement of physiological signals. These methods provide good indicators of fatigue. However, they are invasive and could interfere with the driving task. 3) Works focused on the CV-based detection of the visual responses presented by the driver. The CV-based methods can be successful in assessing fatigue states. However, so far only one visual response has typically been used, which could be the main drawback of the CV-based methods (Zhu et al., 2004).
Measurement of fatigue indicators is a significant problem due to the absence of direct measures; the available indicators are not directly related to fatigue itself but to its effects. The only direct measure is the self-report. However, there are problems related to its use because of the emotional influence on the person (Wang et al., 2006). In the literature there are different studies related to the measurement of performance, physiological signals, and perception, among others. Although the technologies applied to fatigue assessment have evolved, the search for a reliable fatigue indicator is still ahead (Saroj and Lal, 2001).
2.3 Visual Responses and
Fatigue-Related Parameters
In order to detect driver fatigue, it is required to measure different visual responses. Simultaneous measures provide a less ambiguous scenario than a single measure. Some of the visual responses are the eyelids, the head pose, and the facial expressions. From these visual responses, fatigue-related parameters are computed, for example, head-position-dependent parameters and head-position-independent parameters. The latter are used in this work; more specifically, the following are computed: the percentage of eye closure over time (PERCLOS), the average eye closure speed (AECS), and the yawning frequency over time (YawnFrec) (Zhu et al., 2004).
PERCLOS and AECS. These measures are characteristic of the eyelid movement. The PERCLOS has already been validated, and it has been found to be the most appropriate parameter to assess driver-related fatigue (Dinges et al., 1998). The AECS is also a good indicator of fatigue; this parameter is defined as the time taken to completely close the eyes. The eye openness is characterized by the pupil's shape and can be measured by taking the ratio of the axes of the ellipse fitted to the pupil. This feature over time is used to compute the PERCLOS (Zhu et al., 2004). In (Ji and Yang, 2002), it has been shown that the AECS of a tired person is definitely different from that of a rested person.
YawnFrec. A tired person shows fewer facial expressions because there is minimal activity of the facial muscles, but also shows a more open mouth. The mouth openness can be measured by detecting lip movements, i.e., whether the features around the mouth deviate from their closed configuration. The mouth openness is characterized by the ratio between the mouth's height and width, which over time is used to compute the YawnFrec (Zhu et al., 2004).
3 FATIGUE ASSESSMENT BASED
ON COMPUTER VISION
People in a state of fatigue show some visual responses that are easily observable from changes in their facial features, such as the eyes and mouth. CV offers different non-invasive techniques for monitoring drivers (Wang et al., 2006). As Zhang (Zhang and Zhang, 2006) reported, CV-based systems for monitoring drivers are the most promising commercial
application for the assessment of driver-related fatigue (Zhang and Zhang, 2006), (Dong and Wu, 2005), (Saeed et al., 2007), (Zhu et al., 2004), (Ji and Yang, 2002). However, CV is a challenging research area due to different factors such as the complexity of facial expressions, fast eye and head movements, and changing illumination conditions. These factors hinder the development of robust, real-time implementations of these kinds of systems.
Basic Requirements for a CV-Based System. According to (Dong and Wu, 2005) and (Smith et al., 2003), a monitoring system to detect fatigue should fulfill the following requirements: 1) it should be fully automatic; 2) it should be based only on quantitative features, such as the eye openness and closure rate; 3) it should work under changing illumination conditions; 4) it should work under the occlusion conditions that frequently occur when the head moves away from the reference point; 5) it should run in real time; 6) it should be non-invasive, without physical contact with the driver; 7) it should detect the early onset of sleep and warn the driver before an accident occurs.
Steps in a Visual System for Detection of Fatigue. First, near-infrared image sequences are acquired. Second, mathematical morphology is used to improve the image quality and the success rate of the following steps. Third, the Haar algorithm is used to detect the face, and then the Gabor algorithm is used to detect the person's eyes; the distance between the eyes is used as a reference to adjust the Candide model to the person's face. Fourth, the AAM technique is used to track the movements of the face. Finally, the feature extraction stage is performed using the model.
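As an illustration only, the following Python/OpenCV sketch shows the initial detection steps. It relies on OpenCV's standard Haar cascades for both the face and the eyes (the eye cascade here stands in for the Gabor-based eye detector described above), and the morphology kernel, detector parameters, and helper names are assumptions, not the authors' implementation.

# Sketch of the initial detection steps (assumption: OpenCV Haar cascades
# stand in for the face/eye detectors; the Gabor-based eye detection of the
# paper is replaced here by OpenCV's eye cascade for brevity).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_and_eye_distance(frame_gray):
    """Return the face box and the inter-eye distance used to scale the model."""
    # Morphological opening to reduce noise in the near-infrared frame.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    cleaned = cv2.morphologyEx(frame_gray, cv2.MORPH_OPEN, kernel)

    faces = face_cascade.detectMultiScale(cleaned, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    x, y, w, h = faces[0]                      # take the first detected face
    roi = cleaned[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return (x, y, w, h), None
    # Inter-eye distance between the two eye centers (reference for model scaling).
    centers = [(ex + ew / 2.0, ey + eh / 2.0) for ex, ey, ew, eh in eyes[:2]]
    inter_eye = float(np.hypot(centers[0][0] - centers[1][0],
                               centers[0][1] - centers[1][1]))
    return (x, y, w, h), inter_eye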
4 CHARACTERIZATION OF
FACE MOVEMENTS
The most important points for characterizing a face depend on the application and the facial model. In this work, the movements of the eyes and mouth are considered. In this sense, a whole-face model is used instead of a partial model of the eyes, as used in (Saeed et al., 2007).
In order to get a fairly realistic fitting of the model and detect driver fatigue, it is necessary to define the adjustable vertices of the Candide model. These vertices belong to the eye and mouth ROIs, where the parameters PERCLOS, AECS, and YawnFrec are computed. Additionally, it is desirable that the movements of the selected vertices are controlled by facial action units; these movements are part of the list of changes that occur when there is a movement of the ROI, encoded in terms of action unit vectors (AUVs) (Ahlberg, 2001).
Facial Motion Tracking. Prior to the computation of the fatigue parameters, the AAM is used to adjust the face model to the faces of different subjects. The AAM is trained by machine learning algorithms from the available features (Cootes et al., 1995). Then an alignment process is run to adjust the AAM (vertex update) to the input images. To train the AAM, a graphical user interface (GUI) was developed, in which the following steps are performed: 1) scaling of the model to the face based on the inter-eye distance; 2) rotations around the X, Y, and Z axes to place the model in the subject's position; 3) evaluation of shape changes of the model regions (controlled by shape units) and of movements of model regions (controlled by action unit vectors, focusing on the mouth openness AUV11 and the eye closeness AUV6); 4) review and relocation of the points on the boundary of the face, eyes, and mouth; 5) storage of the positions of the vertices of the fitted model for each image in the sequence. Once the training is completed, a fitting algorithm is used to adjust the AAM so as to minimize the fitting error. For example, gradient descent optimization can be used as described in Eq. 1, with the model A_0 and the input image I(x). Once the model is adjusted, the extraction of fatigue features can start.
FittingError = \sum_{x} \left[ I(x) - A_0(x) \right]^2    (1)
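As a minimal sketch of this idea, the following Python code minimizes a fitting error of the form in Eq. 1 by gradient descent, assuming a simple linear appearance model A(x) = A_0(x) + sum_i lambda_i A_i(x); the appearance basis, step size, and iteration count are hypothetical and are not taken from the paper.

import numpy as np

def fit_appearance(I, A0, A_basis, lr=1e-3, n_iter=200):
    """Gradient descent on lambda for E = || I - (A0 + A_basis @ lambda) ||^2.

    I        : observed image, flattened to shape (n_pixels,)
    A0       : mean appearance, shape (n_pixels,)
    A_basis  : appearance modes, shape (n_pixels, n_modes)
    lr       : step size (must be chosen to suit the scale of the basis)
    """
    lam = np.zeros(A_basis.shape[1])
    for _ in range(n_iter):
        residual = I - (A0 + A_basis @ lam)    # I(x) - A(x; lambda)
        grad = -2.0 * A_basis.T @ residual     # dE/dlambda
        lam -= lr * grad
    fitting_error = float(np.sum((I - (A0 + A_basis @ lam)) ** 2))
    return lam, fitting_error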
Eyelid and Lips Movements Characterization. To compute the PERCLOS and AECS, it is proposed in (Ji and Yang, 2002) to continually track the pupil and measure the eye closeness cumulatively over time, using the axis ratio of the pupil ellipse. A single eye closure is defined as the difference between two periods of time in which the pupil size is 20% or less of the normal size. The eye closing speed is defined as the time period in which the pupil size goes from 80% down to 20% of the nominal size of the pupil. In order to compute these two parameters more accurately, the average of all measurements taken over a defined time range is used (Ji and Yang, 2002).
To make these measures, the methodology described in (Ji and Yang, 2002) is applied, but unlike that work, the eye closeness is computed using the eye vertices defined in the Candide-3 model. Specifically, some vertices of the eyelids are used. Thus, the eye closeness is calculated as:
c_{re} = \frac{d(v_{98}, v_{100}) + d(v_{54}, v_{55}) + d(v_{106}, v_{108})}{3}    (2)

c_{le} = \frac{d(v_{105}, v_{107}) + d(v_{21}, v_{22}) + d(v_{97}, v_{99})}{3}    (3)
From Eqs. 2 and 3, when the eye closeness is less than or equal to 20% of the maximum distance between the eyelids, it is considered that the eyes are closed. According to the work developed in (Dong and Wu, 2005), if the eyes are closed for 5 consecutive frames, it can be considered that the driver is falling asleep.
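The following sketch shows how these eye parameters could be computed from a per-frame eye-closeness signal. The vertex indices follow Eqs. 2 and 3, and the 20% threshold and 5-frame rule follow the text above; the frame rate, the 80%-to-20% transition handling, and the helper names are assumptions introduced for illustration.

import numpy as np

FPS = 30  # assumed frame rate

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def eye_closeness(v):
    """Eqs. 2 and 3: v maps Candide-3 vertex indices to 2D positions."""
    c_re = (dist(v[98], v[100]) + dist(v[54], v[55]) + dist(v[106], v[108])) / 3.0
    c_le = (dist(v[105], v[107]) + dist(v[21], v[22]) + dist(v[97], v[99])) / 3.0
    return c_re, c_le

def fatigue_eye_parameters(closeness, max_open):
    """closeness: per-frame eyelid distance; max_open: maximum eyelid distance."""
    c = np.asarray(closeness) / max_open           # normalized openness in [0, 1]
    closed = c <= 0.20                             # eyes considered closed (<= 20%)
    perclos = float(np.mean(closed))               # fraction of closed frames

    # AECS: average time to go from 80% openness down to 20%.
    durations, t_start = [], None
    for i in range(1, len(c)):
        if c[i] < 0.80 <= c[i - 1]:
            t_start = i
        if t_start is not None and c[i] <= 0.20:
            durations.append((i - t_start) / FPS)
            t_start = None
    aecs = float(np.mean(durations)) if durations else 0.0

    # Falling-asleep alarm if the eyes stay closed for 5 or more consecutive frames.
    run, drowsy = 0, False
    for flag in closed:
        run = run + 1 if flag else 0
        if run >= 5:
            drowsy = True
    return perclos, aecs, drowsy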
In order to compute the rate of mouth opening, it is necessary to know the degree of mouth openness, which is represented by the relation between the mouth's height and width. The graphical representation of the mouth openness over a period of time is known as the YawnFrec. To compute the mouth openness, the mouth vertices (right, left, upper, lower) of the Candide-3 model are used. In this sense, the mouth openness is computed as follows:
OpenMouth = \frac{d(v_{7}, v_{8})}{d(v_{64}, v_{31})}    (4)
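A companion sketch for the mouth openness of Eq. 4 and a simple yawning-frequency count is given below; the vertex indices come from Eq. 4, while the openness threshold, minimum yawn duration, and frame rate are assumptions chosen for illustration only.

import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def mouth_openness(v):
    """Eq. 4: ratio of mouth height to mouth width from Candide-3 vertices."""
    return dist(v[7], v[8]) / dist(v[64], v[31])

def yawn_frequency(openness, threshold=0.5, min_frames=15, fps=30):
    """Count yawns (openness above threshold for at least min_frames consecutive
    frames) and return the number of yawns per minute of the analysed sequence."""
    yawns, run = 0, 0
    for o in openness:
        if o > threshold:
            run += 1
        else:
            if run >= min_frames:
                yawns += 1
            run = 0
    if run >= min_frames:          # close a yawn still in progress at the end
        yawns += 1
    minutes = len(openness) / float(fps * 60)
    return yawns / minutes if minutes > 0 else 0.0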
5 EXPERIMENTAL
FRAMEWORK
Two different videos were acquired with 320x340 resolution. The subjects were instructed to blink and yawn in different head positions, ranging from -45° to 45°. Subject 1 blinked 35 times and yawned 5 times. Subject 2 blinked 24 times and yawned 4 times. The system was able to identify the total number of yawns and blinks for each subject. This means a 100% accuracy in the detection of eye closeness and mouth openness. Indeed, the parameters PERCLOS, AECS, and YawnFrec could be determined every time. In order to validate the presented methodology, an experimental framework is proposed. First, a commercial driving simulator is going to be used and three different psychological tests will be set up (the PVT test, the Stroop test, and the RT test). These tests are embedded in the software PEBL (Open Source Psychology Software), available at "http://pebl.sf.net". From these tests, some basic measures will be made, in environmental and physical conditions similar to those presented in the computer-vision-based methodology, in order to compare results and validate the parameters.
6 CONCLUSIONS
In this paper, a model-based characterization methodology to assess fatigue was proposed. Two variables are measured by means of computer vision: the eye closure range and the mouth opening range. These measures are the basis for calculating the parameters PERCLOS, AECS, and YawnFrec. It has been found that the proposed methodology is practical and reliable within the previously described conditions. Also, an experimental framework based on psychological measures was defined in order to validate the proposed methodology. As future work, it is proposed to complement the above methodology with the estimation of the head's position in order to compute additional parameters. This will provide more information for the assessment of driver fatigue.
REFERENCES
Ahlberg, J. (2001). Candide-3 - an updated parameterized face. Report No. LiTH-ISY-R-2326, Dept. of Electrical Engineering, Linköping University, Sweden.
Cootes, T. F., Taylor, C. J., Cooper, D. H., and Graham, J.
(1995). Active shape models-their training and appli-
cation. Comput. Vis. Image Underst., 61(1):38–59.
Dinges, D. F., Mallis, M., Maislin, G., and Powell, J. W.
(1998). Evaluation of techniques for ocular measure-
ment as an index of fatigue and the basis for alertness
management.
Dong, W. and Wu, X. (2005). Driver fatigue detection based on the distance of eyelid. In IEEE Int. Workshop on VLSI Design and Video Technology, Suzhou, pp. 365-368.
Ji, Q. and Yang, X. (2002). Real-time eye, gaze, and face
pose tracking for monitoring driver vigilance. Real-
Time Imaging, 8(5):357–377.
Saeed, I., Wang, A., Senaratne, R., and Halgamuge, S.
(2007). Using the active appearance model to detect
driver fatigue. In Information and Automation for Sus-
tainability, 2007. ICIAFS 2007. Third International
Conference on, pages 124 –128.
Saroj, K. and Lal, A. (2001). A critical review of the psychophysiology of driver fatigue. Biological Psychology, 55:173-194.
Smith, P., Shah, M., and Vitoria, N. (2003). Determining driver visual attention with one camera. IEEE Trans. on Intelligent Transportation Systems, 4(4):205-218.
Ting, P., Hwang, J., Doong, J., and Jeng, M. (2008). Driver fatigue and highway driving: A simulator study. Physiology and Behavior, 94:448-453.
Vural, E., Cetin, M., Ercil, A., Littlewort, G., Bartlett, M.,
and Movellan, J. (2007). Drowsy driver detection
through facial movement analysis. pages 6–18.
Wang, Q., Yang, J., Ren, M., and Zheng, Y. (2006). Driver fatigue detection: A survey. In Proceedings of the 6th World Congress on Intelligent Control and Automation, Dalian, China, pp. 8587-8591.
Zhang, Z. and Zhang, J. (2006). Driver fatigue detection
based intelligent vehicle control. In ICPR ’06: Pro-
ceedings of the 18th International Conference on Pat-
tern Recognition, pages 1262–1265.
Zhu, Z., Ji, Q., and Lan, P. (2004). Real time non-intrusive
monitoring and prediction of driver fatigue. IEEE
Trans. Veh. Technol, 53:1052–1068.