
ACKNOWLEDGMENT
This work was supported by JST SPRING JPMJSP2125, JSPS KAKENHI Grant Number 23H03474, and JST CREST Grant Number JPMJCR22D1. The author Chenkai Zhang would like to take this opportunity to thank the "Interdisciplinary Frontier Next-Generation Researcher Program of the Tokai Higher Education and Research System."
REFERENCES
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., et al. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115.
Bojarski, M., Choromanska, A., Choromanski, K., Firner, B., Jackel, L., Muller, U., and Zieba, K. (2016a). VisualBackProp: Visualizing CNNs for autonomous driving. arXiv preprint arXiv:1611.05418, 2.
Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L. D., Monfort, M., Muller, U., Zhang, J., et al. (2016b). End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.
Cui, X., Lee, J. M., and Hsieh, J. (2019). An integrative 3C evaluation framework for explainable artificial intelligence.
Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., and Darrell, T. (2015). Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. (2018a). Explaining explanations: An approach to evaluating interpretability of machine learning. arXiv preprint arXiv:1806.00069, page 118.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. (2018b). Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), pages 80–89. IEEE.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5):1–42.
He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., and Wong, W.-K. (2013). Too much, too little, or just right? Ways explanations impact end users' mental models. In 2013 IEEE Symposium on Visual Languages and Human Centric Computing, pages 3–10. IEEE.
Lage, I., Chen, E., He, J., Narayanan, M., Kim, B., Gershman, S., and Doshi-Velez, F. (2019). An evaluation of the human-interpretability of explanation. arXiv preprint arXiv:1902.00006.
Lee, J. and Moray, N. (1992). Trust, control strategies and allocation of function in human-machine systems. Ergonomics, 35(10):1243–1270.
Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J. Z., Langer, D., Pink, O., Pratt, V., et al. (2011). Towards fully autonomous driving: Systems and algorithms. In 2011 IEEE Intelligent Vehicles Symposium (IV), pages 163–168. IEEE.
Lipton, Z. C. (2018). The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue, 16(3):31–57.
Mascharka, D., Tran, P., Soklaski, R., and Majumdar, A. (2018). Transparency by design: Closing the gap between performance and interpretability in visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4942–4950.
McAllister, R., Gal, Y., Kendall, A., Van Der Wilk, M., Shah, A., Cipolla, R., and Weller, A. (2017). Concrete problems for autonomous vehicle safety: Advantages of Bayesian deep learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization.
Mohseni, S., Zarei, N., and Ragan, E. D. (2018). A survey of evaluation methods and measures for interpretable machine learning. arXiv preprint arXiv:1811.11839, 1:1–16.
Mohseni, S., Zarei, N., and Ragan, E. D. (2021). A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS), 11(3-4):1–45.
Petsiuk, V., Das, A., and Saenko, K. (2018). RISE: Randomized input sampling for explanation of black-box models. arXiv preprint arXiv:1806.07421.
Pomerleau, D. (1998). An autonomous land vehicle in a neural network. Advances in Neural Information Processing Systems, 1:1.
Ras, G., Xie, N., Van Gerven, M., and Doran, D. (2022). Explainable deep learning: A field guide for the uninitiated. Journal of Artificial Intelligence Research, 73:329–396.
Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.
Scholl, B. J. (2001). Objects and attention: The state of the art. Cognition, 80(1-2):1–46.
Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618–626.