Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data

Mir Riyanul Islam, Mobyen Uddin Ahmed, Shahina Begum

2023

Abstract

Understanding individual car drivers’ behavioural variations and heterogeneity is a significant aspect of developing car simulator technologies, which are widely used in transport safety. This study characterizes the heterogeneity in drivers’ behaviour in terms of risk and hurry, using driving performance features recorded both in real time on track and in a simulator. Machine learning (ML) interpretability has become increasingly crucial for identifying accurate and relevant structural relationships between spatial events and the factors that explain drivers’ behaviour as it is classified, and for evaluating the resulting explanations. However, ML algorithms with high predictive power often ignore the non-stationary domain relationships in spatiotemporal data (e.g., dependence, heterogeneity), which can lead to incorrect interpretations and poor management decisions. This study addresses this critical issue of ‘interpretability’ in ML-based modelling of the structural relationships between events and the corresponding features of car drivers’ behavioural variations. It describes an exploratory experiment comprising concurrent simulator and real driving, conducted with the goal of enhancing simulator technologies. First, several analytic techniques were explored on the heterogeneous data to detect simulator bias in drivers’ behaviour. Next, five ML classifier models were developed to classify risk and hurry in drivers’ behaviour in real and simulator driving. Finally, two feature attribution-based explanation models were developed to explain the classifiers’ decisions. Among the classifiers, Gradient Boosted Decision Trees performed best, with a classification accuracy of 98.62%. After quantitative evaluation, the explanations from Shapley Additive Explanations (SHAP) were found to be the more accurate of the two feature attribution methods. The use of different metrics for evaluating explanation methods and their outcomes paves the way for further research on enhancing feature attribution methods.
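
To make the pipeline the abstract describes concrete, the following is a minimal Python sketch (not the authors' code): it trains a gradient-boosted tree classifier, one of the five classifier families mentioned, and explains its predictions with SHAP. The synthetic data and the feature names are hypothetical placeholders standing in for the heterogeneous on-track/in-simulator driving features; scikit-learn and the shap package are assumed to be installed.

    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the heterogeneous driving data; the labels
    # play the role of the risk/hurry behaviour classes.
    X, y = make_classification(n_samples=1000, n_features=6,
                               n_informative=4, random_state=0)
    feature_names = ["speed", "lateral_position", "steering_angle",
                     "throttle", "brake_pressure", "headway"]  # hypothetical

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    # Gradient Boosted Decision Trees, the best-performing classifier
    # reported in the paper (here trained only on toy data).
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

    # Feature attribution with SHAP; TreeExplainer is suited to tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X_test)  # per-feature contributions
    print("mean |SHAP| per feature:",
          dict(zip(feature_names,
                   np.abs(shap_values).mean(axis=0).round(3))))

The mean absolute SHAP value per feature printed at the end is one simple way to rank features globally; the paper's quantitative evaluation of explanation methods goes further than this sketch.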


Paper Citation


in Harvard Style

Islam M., Ahmed M. and Begum S. (2023). Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, ISBN 978-989-758-623-1, pages 392-404. DOI: 10.5220/0011801000003393


in Bibtex Style

@conference{icaart23,
author={Mir Riyanul Islam and Mobyen Uddin Ahmed and Shahina Begum},
title={Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data},
booktitle={Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},
year={2023},
pages={392-404},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0011801000003393},
isbn={978-989-758-623-1},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART
TI - Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour: An Exploratory Analysis on Heterogeneous Data
SN - 978-989-758-623-1
AU - Islam M.
AU - Ahmed M.
AU - Begum S.
PY - 2023
SP - 392
EP - 404
DO - 10.5220/0011801000003393
ER -