5 CONCLUSION
In this study, we investigated the effect of vehicle dynamics on driver attention to traffic signs and on missed traffic signs during driving. Using an accurate object detection algorithm, YOLO-V4, and an accurate algorithm to map the driver's gaze to the forward stereoscopic system, we computed the intersection of the driver's visual attention area with detected traffic signs. We determined the number of missed traffic signs and the numbers of pre-attentive and attentive fixations at various speed ranges. The results indicate that fewer traffic signs are missed and that there are more pre-attentive and attentive fixations at lower speeds. The results also indicate that drivers differ in how they check traffic signs while driving. In future work, we plan to apply our method to a larger and more diverse dataset and to explore the potential impact of environmental factors, e.g., day/night, fog, harsh sunlight, rain, and snow. We also plan to investigate fusing our method with data from other sensors to improve accuracy. Analyzing the effect of sign characteristics, e.g., shape, color, and orientation, on missed traffic signs is another interesting direction that could provide more insight into this subject.
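The core gaze–sign intersection test described above can be sketched as follows. This is a minimal illustration only: the circular attention area, its radius, the frame-indexed data layout, and the notion of a sign "track" are assumptions for exposition, not the exact implementation used in the study.

```python
def gaze_intersects_sign(gaze_x, gaze_y, radius, box):
    """Return True if a circular gaze/attention area overlaps a detected
    sign's bounding box (x_min, y_min, x_max, y_max), all expressed in
    the forward-scene image frame."""
    x_min, y_min, x_max, y_max = box
    # Nearest point of the box to the gaze centre (clamp to box edges).
    nearest_x = max(x_min, min(gaze_x, x_max))
    nearest_y = max(y_min, min(gaze_y, y_max))
    # Overlap iff that nearest point lies within the attention radius.
    dx, dy = gaze_x - nearest_x, gaze_y - nearest_y
    return dx * dx + dy * dy <= radius * radius


def count_missed_signs(signs_per_frame, gaze_per_frame, radius):
    """Count sign tracks never overlapped by the gaze area in any frame
    where they were detected. signs_per_frame maps frame -> {track_id: box};
    gaze_per_frame maps frame -> (x, y) gaze point."""
    all_ids, seen = set(), set()
    for frame, signs in signs_per_frame.items():
        gx, gy = gaze_per_frame[frame]
        for track_id, box in signs.items():
            all_ids.add(track_id)
            if gaze_intersects_sign(gx, gy, radius, box):
                seen.add(track_id)
    return len(all_ids - seen)
```

In this sketch, a sign counts as "missed" only if the attention area never touches its bounding box over the sign's entire visible lifetime, which is why detections are grouped by track rather than scored per frame.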
In considering the implications for ADAS, we note that not all signs are equally important; e.g., a stop sign is probably more important than a parking sign. Thus, we may want to focus on "critical" signs, which may depend on the driving context. We would like to implement our method in an equipped car for use in actual driving situations, where we can determine whether a driver misses a critical traffic sign, such as a stop sign, and possibly warn the driver.
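One simple way such a warning policy could be realised is a criticality filter over missed signs. The sign classes and rankings below are illustrative assumptions only; in practice they would be tuned to the driving context, as noted above.

```python
# Hypothetical criticality ranking (higher = more critical); real weights
# would depend on the driving context (e.g., urban vs. highway).
SIGN_CRITICALITY = {"stop": 3, "yield": 3, "speed_limit": 2, "parking": 1}


def signs_to_warn_about(missed_signs, threshold=2):
    """Filter missed signs down to those critical enough to warn the driver.
    Unknown sign classes default to criticality 0 (no warning)."""
    return [s for s in missed_signs if SIGN_CRITICALITY.get(s, 0) >= threshold]
```

With a threshold of 2, a missed stop sign or speed-limit sign would trigger a warning, while a missed parking sign would not.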
VISAPP 2023 - 18th International Conference on Computer Vision Theory and Applications