Authors:
Chenkai Zhang; Daisuke Deguchi; Jialei Chen and Hiroshi Murase
Affiliation:
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
Keyword(s):
Autonomous Driving, Convolutional Neural Network, End-to-End Model, Explainability.
Abstract:
Deep learning technology has advanced rapidly, leading to End-to-End driving models (E2EDMs) for autonomous vehicles with high prediction accuracy. To help people understand the predictions of these E2EDMs, attribution-based explanation methods are among the most representative approaches. They come in two kinds, pixel-level and object-level, and typically produce heatmaps that illustrate the importance of pixels or objects to the prediction, serving as explanations for E2EDMs. Since many attribution-based explanation methods exist, evaluation methods have been proposed to determine which one better improves the explainability of E2EDMs. Fidelity, a fundamental property, measures an explanation's faithfulness to the model's prediction process. However, no existing evaluation method can measure the fidelity difference between object-level and pixel-level explanations, leaving the current evaluation incomplete. Moreover, without considering fidelity, previous evaluation methods may favor manipulative explanations that solely seek human satisfaction (persuasibility). We therefore propose an evaluation method that further considers fidelity, enabling a comprehensive evaluation which shows that object-level explanations genuinely outperform pixel-level explanations in both fidelity and persuasibility, and can thus better improve the explainability of E2EDMs.
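To illustrate the fidelity notion above, here is a minimal deletion-style sketch, assuming a generic scalar-output driving model and an attribution heatmap of the same size as the input; it is not the paper's published procedure. The idea: occlude the pixels the heatmap marks as most important and measure how much the model's prediction changes; a faithful explanation should cause a large change.

import numpy as np

def deletion_fidelity(model, image, heatmap, fraction=0.1, fill=0.0):
    """Drop in prediction after removing the top `fraction` most
    important pixels according to `heatmap` (same HxW as `image`)."""
    base = model(image)
    k = int(fraction * heatmap.size)
    # Indices of the k highest-attribution pixels.
    flat = np.argsort(heatmap, axis=None)[::-1][:k]
    rows, cols = np.unravel_index(flat, heatmap.shape)
    occluded = image.copy()
    occluded[rows, cols] = fill       # remove the "important" evidence
    return base - model(occluded)     # larger drop => more faithful map

# Toy usage with a stand-in model that averages one region of the image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
model = lambda x: float(x[:32, :].mean())   # hypothetical E2EDM output
hm = np.zeros_like(img)
hm[:32, :] = 1.0                            # attribution on that region
print(deletion_fidelity(model, img, hm))    # positive drop expected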