from frame 3361 to frame 3865, for example. The facial
features cannot be located before image compensation,
but the Viola-Jones-like detector can detect them after
image compensation.
Our proposed method using image compensation
achieves a high location rate (94.9% on average),
except in the following situations. In frame 281 of
Figure 11, the driver moves his face forward, which
distorts the shape of his face, so we fail to locate his
facial features; they are located again when the
driver’s face returns to its original position. When
the vehicle moves through the tunnel, the facial
features are not visible under such low illumination.
Frame 2857 is a case where the facial features are
occluded (in this case by the driver’s hand). Our
tracking algorithm can predict the expected position,
but we still count such a case as a failed location.
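Because tracker-predicted positions are counted as failures, the reported location rate only credits frames where the features are actually located. A minimal sketch of such a tally (the per-frame outcome labels here are hypothetical, not from the paper):

```python
# Sketch: tally a location rate where tracker *predictions* under occlusion
# count as failures, matching the paper's accounting. Labels are hypothetical.
from collections import Counter

def location_rate(frame_results):
    """frame_results: iterable of per-frame outcomes:
    'located'   -- facial features located directly (counts as success)
    'predicted' -- position only predicted by the tracker (counts as failure)
    'missed'    -- no location at all (counts as failure)
    """
    counts = Counter(frame_results)
    total = sum(counts.values())
    return counts["located"] / total if total else 0.0

# Example: 18 located frames, 1 predicted (occlusion), 1 missed
results = ["located"] * 18 + ["predicted", "missed"]
print(f"{location_rate(results):.1%}")  # prints 90.0%
```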
Figure 11: Illustrations of failed detections (frames 281,
1360, 2857, and 3435). In frame 281, the driver moves
his head forward, outside the monitoring range.
Illumination is very low in frame 1360, so the facial
features cannot be seen even after applying image
compensation. The facial features are occluded in
frame 2857. In frame 3435, the facial features cannot
be located due to both low illumination and occlusion.
6 CONCLUSIONS
While many vision systems have been reported for
detecting and monitoring driver drowsiness or
fatigue, few have used chromatic images as their
input data. In this article, we presented a system that
monitors a driver’s facial features using color images
acquired by a video camera as input. Although colors
provide rich information, they suffer from low
intensity and brightness variation. In the driving
scenario in particular, the environmental light
projected on the driver may change rapidly due to
the vehicle’s motion. We introduced an image
compensation method to handle these variations.
This process significantly improves the location rate
of the facial features under poor illumination
conditions.
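The compensation algorithm itself is not reproduced here, but its intent, normalizing frame brightness so the feature detector sees a more stable input, can be illustrated with plain histogram equalization of pixel intensities (a sketch only; the actual method in the paper may differ):

```python
# Sketch of per-frame intensity compensation via histogram equalization
# (pure Python). This only illustrates the idea of redistributing a dark
# frame's intensities over the full range before feature detection.
def equalize(intensity, levels=256):
    """Histogram-equalize a flat list of integer intensities in [0, levels)."""
    hist = [0] * levels
    for v in intensity:
        hist[v] += 1
    # cumulative distribution function of the intensity histogram
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(intensity)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    # map each value so the output histogram is approximately uniform
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[v] for v in intensity]

# A dark frame (values clustered near 0) is stretched over the full range
dark = [10, 10, 12, 14, 14, 16, 20, 22]
print(equalize(dark))
```

For a color image, the same mapping would typically be applied to a luminance or intensity channel rather than to each RGB channel independently, to avoid shifting hues.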