40.0 meters, the first car would not have initiated the accident.
The car crashed into the Trash Container because the brakes were not working. The obstacle detector was, however, functioning with a hit radius of 0.7 degrees, making the car's field of view wide enough to detect the Trash Container, and a detection distance of 60.0 meters, which gave the car enough time to identify the obstacle from far away and make a decision; but because the brakes were not working, the vehicle could not stop to avoid the accident. On the other hand, the car would not have crashed into the Trash Container if the brakes had been working and the obstacle detector had been functioning with a hit radius of 0.7 degrees, which would make the car's field of view wide enough to detect the Trash Container, and a detection distance of 33.0 meters, which would give the car enough time to identify the obstacle from far away and decide to stop the vehicle, avoiding the accident.
Figure 21: Insights for Stationary Objects Counterfactuals Example 3.
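To make the logic of this example concrete, the sketch below is a minimal illustration, not the framework's code, of the crash rule implied by the stationary-object scenario: a collision is avoided only when the obstacle is detected in time and the brakes work. The threshold parameters min_hit_radius_deg and min_detection_distance_m are assumptions introduced here for illustration.

# Minimal sketch (not the paper's implementation) of the decision logic implied
# by the stationary-object example above.

def crash_occurs(brakes_working: bool,
                 detector_working: bool,
                 hit_radius_deg: float,
                 detection_distance_m: float,
                 min_hit_radius_deg: float = 0.7,        # assumed threshold
                 min_detection_distance_m: float = 33.0  # assumed threshold
                 ) -> bool:
    """Return True if the car crashes into the stationary obstacle."""
    detected_in_time = (detector_working
                        and hit_radius_deg >= min_hit_radius_deg
                        and detection_distance_m >= min_detection_distance_m)
    # Even a timely detection cannot prevent the crash without working brakes.
    return not (detected_in_time and brakes_working)

# Original instance from the example: detector fine, brakes broken -> crash.
assert crash_occurs(False, True, 0.7, 60.0) is True
# Counterfactual: working brakes, detection distance 33.0 m -> no crash.
assert crash_occurs(True, True, 0.7, 33.0) is False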
The first car caused an accident because its brakes were not working and its obstacle detector was not functioning, so it continued straight and crashed into the second vehicle. The second vehicle had working brakes, but its obstacle detector was not functioning, so it was unable to detect other obstacles and decide how to avoid the accident; instead, it continued straight and crashed into the third vehicle. The third vehicle had neither working brakes nor a functioning obstacle detector, so it was likewise unable to detect other obstacles and avoid the accident, and it continued straight. On the other hand, the first car would not have caused an accident if its brakes had been working and its obstacle detector had been functioning with a hit radius of 1.1 degrees, which would make the car's field of view able to detect other obstacles, and a detection distance of 40.0 meters, which would give the car enough time to identify the obstacle from far away and decide to stop the vehicle, avoiding the accident.
Figure 22: Insights for Chain Reactions Counterfactuals Example 3.
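A chain-reaction scenario like the one above can be pictured as a list of per-vehicle feature vectors. The sketch below is illustrative only; the Vehicle fields and the crash-propagation rule are assumptions about how such a scenario could be encoded, not the framework's actual representation.

# Hedged sketch of a possible encoding of the chain-reaction scenario;
# the propagation rule is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Vehicle:
    brakes_working: bool
    detector_working: bool

def chain_crashes(vehicles: list) -> list:
    """Return a list of booleans marking which vehicles end up in the crash."""
    crashed = [False] * len(vehicles)
    for i, v in enumerate(vehicles[:-1]):
        # A vehicle rear-ends the one ahead of it when it cannot both
        # detect the obstacle and brake in time.
        if not (v.brakes_working and v.detector_working):
            crashed[i] = True
            crashed[i + 1] = True
    return crashed

# Original instance: every vehicle has at least one failing component.
original = [Vehicle(False, False), Vehicle(True, False), Vehicle(False, False)]
# Counterfactual: the first car has working brakes and a working detector.
counterfactual = [Vehicle(True, True), Vehicle(True, False), Vehicle(False, False)]
print(chain_crashes(original))        # all three vehicles are involved
print(chain_crashes(counterfactual))  # the first car no longer initiates the accident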
5 CONCLUSION AND FUTURE WORK
This research aimed to explain to users why an autonomous vehicle made a specific decision, particularly in car accident scenarios, by providing counterfactual explanations together with insights that make the produced counterfactuals easier to interpret. Results show that when the distance between the original instance and the generated counterfactual is lower than the distance between the original instance and a reference instance, the produced counterfactual is the nearest one attainable. In addition, using the nearest-neighbor reference instance is the best choice for checking whether the generated counterfactual falls within the decision boundary of the desired class. Moreover, small changes between the original instance and the counterfactual example satisfy the feasibility property of sparsity. Finally, obtaining a high percentage for the generated counterfactual indicates that it is plausible and further supports that it led to the target class. Overall, the developed system explains why a self-driving car made specific decisions in a variety of car accident scenarios.
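As a rough illustration of the two checks summarized above, the following sketch computes the proximity comparison against a nearest-neighbor reference instance and a simple sparsity count. The feature vectors and the reference pool are hypothetical, feature scaling is ignored for brevity, and this is not the evaluation code used in this work.

# Illustrative sketch (assumed data, not the paper's code) of the proximity
# and sparsity checks for a generated counterfactual.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def proximity_check(original, counterfactual, reference_pool):
    """True if the counterfactual is nearer to the original instance than the
    nearest-neighbor reference instance drawn from the desired class."""
    nn = NearestNeighbors(n_neighbors=1).fit(reference_pool)
    ref_dist, _ = nn.kneighbors(original.reshape(1, -1))
    cf_dist = np.linalg.norm(original - counterfactual)
    return cf_dist <= ref_dist[0, 0]

def sparsity(original, counterfactual, tol=1e-9):
    """Number of features changed between the original and the counterfactual."""
    return int(np.sum(np.abs(original - counterfactual) > tol))

# Hypothetical feature vectors: [brakes_working, hit_radius, detection_distance]
original = np.array([0.0, 0.7, 60.0])
counterfactual = np.array([1.0, 0.7, 33.0])
desired_class_instances = np.array([[1.0, 1.1, 20.0], [1.0, 0.9, 15.0]])

print(proximity_check(original, counterfactual, desired_class_instances))  # True for these made-up numbers
print(sparsity(original, counterfactual))  # 2 features changed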
Future research should test and evaluate more complex chain-reaction cases in order to learn more about the reasons behind an autonomous vehicle's decisions, for example by including pedestrians crossing the street in the scene and observing how the car behaves in such situations.